00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3477 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3088 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.076 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.076 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.078 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.145 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.240 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.252 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.265 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:05.265 > git config core.sparsecheckout # timeout=10 00:00:05.278 > git read-tree -mu HEAD # timeout=10 00:00:05.295 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:05.315 Commit message: "inventory/dev: add missing long names" 00:00:05.315 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:05.402 [Pipeline] Start of Pipeline 00:00:05.416 [Pipeline] library 00:00:05.418 Loading library shm_lib@master 00:00:05.418 Library shm_lib@master is cached. Copying from home. 00:00:05.435 [Pipeline] node 00:00:20.437 Still waiting to schedule task 00:00:20.438 Waiting for next available executor on ‘vagrant-vm-host’ 00:06:46.662 Running on VM-host-SM4 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:06:46.663 [Pipeline] { 00:06:46.675 [Pipeline] catchError 00:06:46.676 [Pipeline] { 00:06:46.690 [Pipeline] wrap 00:06:46.701 [Pipeline] { 00:06:46.709 [Pipeline] stage 00:06:46.711 [Pipeline] { (Prologue) 00:06:46.729 [Pipeline] echo 00:06:46.730 Node: VM-host-SM4 00:06:46.737 [Pipeline] cleanWs 00:06:46.745 [WS-CLEANUP] Deleting project workspace... 00:06:46.745 [WS-CLEANUP] Deferred wipeout is used... 
00:06:46.750 [WS-CLEANUP] done 00:06:46.916 [Pipeline] setCustomBuildProperty 00:06:46.986 [Pipeline] nodesByLabel 00:06:46.988 Found a total of 1 nodes with the 'sorcerer' label 00:06:46.996 [Pipeline] httpRequest 00:06:46.999 HttpMethod: GET 00:06:46.999 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:47.002 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:47.004 Response Code: HTTP/1.1 200 OK 00:06:47.004 Success: Status code 200 is in the accepted range: 200,404 00:06:47.005 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:47.144 [Pipeline] sh 00:06:47.421 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:06:47.440 [Pipeline] httpRequest 00:06:47.443 HttpMethod: GET 00:06:47.444 URL: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:06:47.444 Sending request to url: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:06:47.445 Response Code: HTTP/1.1 200 OK 00:06:47.446 Success: Status code 200 is in the accepted range: 200,404 00:06:47.446 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:06:49.593 [Pipeline] sh 00:06:49.873 + tar --no-same-owner -xf spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:06:53.164 [Pipeline] sh 00:06:53.443 + git -C spdk log --oneline -n5 00:06:53.443 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:06:53.443 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:06:53.443 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:06:53.443 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:06:53.443 4506c0c36 test/common: Enable inherit_errexit 00:06:53.464 [Pipeline] withCredentials 00:06:53.475 > git --version # timeout=10 00:06:53.485 > git --version # 'git version 2.39.2' 00:06:53.499 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:06:53.501 [Pipeline] { 00:06:53.510 [Pipeline] retry 00:06:53.512 [Pipeline] { 00:06:53.529 [Pipeline] sh 00:06:53.806 + git ls-remote http://dpdk.org/git/dpdk main 00:06:54.075 [Pipeline] } 00:06:54.097 [Pipeline] // retry 00:06:54.102 [Pipeline] } 00:06:54.120 [Pipeline] // withCredentials 00:06:54.131 [Pipeline] httpRequest 00:06:54.135 HttpMethod: GET 00:06:54.135 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:06:54.137 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:06:54.139 Response Code: HTTP/1.1 200 OK 00:06:54.139 Success: Status code 200 is in the accepted range: 200,404 00:06:54.139 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:06:55.221 [Pipeline] sh 00:06:55.499 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:06:56.885 [Pipeline] sh 00:06:57.162 + git -C dpdk log --oneline -n5 00:06:57.162 7e06c0de19 examples: move alignment attribute on types for MSVC 00:06:57.162 27595cd830 drivers: move alignment attribute on types for MSVC 00:06:57.162 0efea35a2b app: move alignment attribute on types for MSVC 00:06:57.162 e2e546ab5b version: 24.07-rc0 00:06:57.162 a9778aad62 version: 24.03.0 00:06:57.182 [Pipeline] writeFile 00:06:57.215 
[Pipeline] sh 00:06:57.522 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:06:57.535 [Pipeline] sh 00:06:57.815 + cat autorun-spdk.conf 00:06:57.816 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:57.816 SPDK_TEST_NVMF=1 00:06:57.816 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:57.816 SPDK_TEST_URING=1 00:06:57.816 SPDK_TEST_USDT=1 00:06:57.816 SPDK_RUN_UBSAN=1 00:06:57.816 NET_TYPE=virt 00:06:57.816 SPDK_TEST_NATIVE_DPDK=main 00:06:57.816 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:06:57.816 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:57.822 RUN_NIGHTLY=1 00:06:57.824 [Pipeline] } 00:06:57.840 [Pipeline] // stage 00:06:57.859 [Pipeline] stage 00:06:57.861 [Pipeline] { (Run VM) 00:06:57.876 [Pipeline] sh 00:06:58.155 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:06:58.155 + echo 'Start stage prepare_nvme.sh' 00:06:58.155 Start stage prepare_nvme.sh 00:06:58.155 + [[ -n 8 ]] 00:06:58.155 + disk_prefix=ex8 00:06:58.155 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:06:58.155 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:06:58.155 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:06:58.155 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:58.155 ++ SPDK_TEST_NVMF=1 00:06:58.155 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:58.155 ++ SPDK_TEST_URING=1 00:06:58.155 ++ SPDK_TEST_USDT=1 00:06:58.155 ++ SPDK_RUN_UBSAN=1 00:06:58.155 ++ NET_TYPE=virt 00:06:58.155 ++ SPDK_TEST_NATIVE_DPDK=main 00:06:58.155 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:06:58.155 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:58.155 ++ RUN_NIGHTLY=1 00:06:58.155 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:06:58.155 + nvme_files=() 00:06:58.155 + declare -A nvme_files 00:06:58.155 + backend_dir=/var/lib/libvirt/images/backends 00:06:58.155 + nvme_files['nvme.img']=5G 00:06:58.155 + nvme_files['nvme-cmb.img']=5G 00:06:58.155 + nvme_files['nvme-multi0.img']=4G 00:06:58.155 + nvme_files['nvme-multi1.img']=4G 00:06:58.155 + nvme_files['nvme-multi2.img']=4G 00:06:58.155 + nvme_files['nvme-openstack.img']=8G 00:06:58.155 + nvme_files['nvme-zns.img']=5G 00:06:58.155 + (( SPDK_TEST_NVME_PMR == 1 )) 00:06:58.155 + (( SPDK_TEST_FTL == 1 )) 00:06:58.155 + (( SPDK_TEST_NVME_FDP == 1 )) 00:06:58.155 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:06:58.155 + for nvme in "${!nvme_files[@]}" 00:06:58.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G 00:06:58.155 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:06:58.155 + for nvme in "${!nvme_files[@]}" 00:06:58.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G 00:06:58.155 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:06:58.155 + for nvme in "${!nvme_files[@]}" 00:06:58.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G 00:06:58.413 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:06:58.413 + for nvme in "${!nvme_files[@]}" 00:06:58.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G 00:06:58.413 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:06:58.413 + for nvme in "${!nvme_files[@]}" 00:06:58.413 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G 00:06:58.671 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:06:58.671 + for nvme in "${!nvme_files[@]}" 00:06:58.671 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G 00:06:58.671 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:06:58.671 + for nvme in "${!nvme_files[@]}" 00:06:58.671 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G 00:06:59.606 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:06:59.606 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu 00:06:59.606 + echo 'End stage prepare_nvme.sh' 00:06:59.606 End stage prepare_nvme.sh 00:06:59.619 [Pipeline] sh 00:06:59.973 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:06:59.973 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -H -a -v -f fedora38 00:06:59.973 00:06:59.973 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:06:59.973 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:06:59.973 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:06:59.973 HELP=0 00:06:59.973 DRY_RUN=0 00:06:59.973 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img, 00:06:59.973 NVME_DISKS_TYPE=nvme,nvme, 00:06:59.973 NVME_AUTO_CREATE=0 00:06:59.973 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img, 00:06:59.973 NVME_CMB=,, 00:06:59.973 NVME_PMR=,, 00:06:59.973 NVME_ZNS=,, 00:06:59.973 NVME_MS=,, 00:06:59.973 NVME_FDP=,, 
00:06:59.973 SPDK_VAGRANT_DISTRO=fedora38 00:06:59.973 SPDK_VAGRANT_VMCPU=10 00:06:59.973 SPDK_VAGRANT_VMRAM=12288 00:06:59.973 SPDK_VAGRANT_PROVIDER=libvirt 00:06:59.973 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:06:59.973 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:06:59.973 SPDK_OPENSTACK_NETWORK=0 00:06:59.973 VAGRANT_PACKAGE_BOX=0 00:06:59.973 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:06:59.973 FORCE_DISTRO=true 00:06:59.973 VAGRANT_BOX_VERSION= 00:06:59.973 EXTRA_VAGRANTFILES= 00:06:59.973 NIC_MODEL=e1000 00:06:59.973 00:06:59.973 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt' 00:06:59.973 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:07:03.254 Bringing machine 'default' up with 'libvirt' provider... 00:07:03.819 ==> default: Creating image (snapshot of base box volume). 00:07:03.819 ==> default: Creating domain with the following settings... 00:07:03.819 ==> default: -- Name: fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715779516_3e55459deb60f20ece15 00:07:03.819 ==> default: -- Domain type: kvm 00:07:03.819 ==> default: -- Cpus: 10 00:07:03.819 ==> default: -- Feature: acpi 00:07:03.819 ==> default: -- Feature: apic 00:07:03.819 ==> default: -- Feature: pae 00:07:03.819 ==> default: -- Memory: 12288M 00:07:03.819 ==> default: -- Memory Backing: hugepages: 00:07:03.819 ==> default: -- Management MAC: 00:07:03.819 ==> default: -- Loader: 00:07:03.820 ==> default: -- Nvram: 00:07:03.820 ==> default: -- Base box: spdk/fedora38 00:07:03.820 ==> default: -- Storage pool: default 00:07:03.820 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1701806725-069-updated-1701632595-patched-kernel_default_1715779516_3e55459deb60f20ece15.img (20G) 00:07:03.820 ==> default: -- Volume Cache: default 00:07:03.820 ==> default: -- Kernel: 00:07:03.820 ==> default: -- Initrd: 00:07:03.820 ==> default: -- Graphics Type: vnc 00:07:03.820 ==> default: -- Graphics Port: -1 00:07:03.820 ==> default: -- Graphics IP: 127.0.0.1 00:07:03.820 ==> default: -- Graphics Password: Not defined 00:07:03.820 ==> default: -- Video Type: cirrus 00:07:03.820 ==> default: -- Video VRAM: 9216 00:07:03.820 ==> default: -- Sound Type: 00:07:03.820 ==> default: -- Keymap: en-us 00:07:03.820 ==> default: -- TPM Path: 00:07:03.820 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:03.820 ==> default: -- Command line args: 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:03.820 ==> default: -> value=-drive, 00:07:03.820 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-0-drive0, 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:03.820 ==> default: -> value=-drive, 00:07:03.820 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:07:03.820 ==> default: -> value=-drive, 00:07:03.820 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:03.820 ==> default: -> value=-drive, 00:07:03.820 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:03.820 ==> default: -> value=-device, 00:07:03.820 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:04.080 ==> default: Creating shared folders metadata... 00:07:04.080 ==> default: Starting domain. 00:07:06.026 ==> default: Waiting for domain to get an IP address... 00:07:24.131 ==> default: Waiting for SSH to become available... 00:07:24.131 ==> default: Configuring and enabling network interfaces... 00:07:27.419 default: SSH address: 192.168.121.59:22 00:07:27.419 default: SSH username: vagrant 00:07:27.419 default: SSH auth method: private key 00:07:29.948 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:07:38.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:07:43.353 ==> default: Mounting SSHFS shared folder... 00:07:45.920 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:07:45.920 ==> default: Checking Mount.. 00:07:46.854 ==> default: Folder Successfully Mounted! 00:07:46.854 ==> default: Running provisioner: file... 00:07:47.788 default: ~/.gitconfig => .gitconfig 00:07:48.354 00:07:48.354 SUCCESS! 00:07:48.354 00:07:48.354 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:07:48.355 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:07:48.355 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:07:48.355 00:07:48.366 [Pipeline] } 00:07:48.384 [Pipeline] // stage 00:07:48.392 [Pipeline] dir 00:07:48.393 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora38-libvirt 00:07:48.394 [Pipeline] { 00:07:48.408 [Pipeline] catchError 00:07:48.410 [Pipeline] { 00:07:48.426 [Pipeline] sh 00:07:48.700 + + vagrant ssh-config --host vagrant 00:07:48.700 sed -ne /^Host/,$p 00:07:48.700 + tee ssh_conf 00:07:52.879 Host vagrant 00:07:52.879 HostName 192.168.121.59 00:07:52.879 User vagrant 00:07:52.879 Port 22 00:07:52.879 UserKnownHostsFile /dev/null 00:07:52.879 StrictHostKeyChecking no 00:07:52.879 PasswordAuthentication no 00:07:52.879 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1701806725-069-updated-1701632595-patched-kernel/libvirt/fedora38 00:07:52.879 IdentitiesOnly yes 00:07:52.879 LogLevel FATAL 00:07:52.879 ForwardAgent yes 00:07:52.879 ForwardX11 yes 00:07:52.879 00:07:52.894 [Pipeline] withEnv 00:07:52.898 [Pipeline] { 00:07:52.919 [Pipeline] sh 00:07:53.195 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:07:53.195 source /etc/os-release 00:07:53.195 [[ -e /image.version ]] && img=$(< /image.version) 00:07:53.195 # Minimal, systemd-like check. 
00:07:53.195 if [[ -e /.dockerenv ]]; then 00:07:53.195 # Clear garbage from the node's name: 00:07:53.195 # agt-er_autotest_547-896 -> autotest_547-896 00:07:53.195 # $HOSTNAME is the actual container id 00:07:53.195 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:07:53.195 if mountpoint -q /etc/hostname; then 00:07:53.195 # We can assume this is a mount from a host where container is running, 00:07:53.195 # so fetch its hostname to easily identify the target swarm worker. 00:07:53.195 container="$(< /etc/hostname) ($agent)" 00:07:53.195 else 00:07:53.195 # Fallback 00:07:53.195 container=$agent 00:07:53.195 fi 00:07:53.195 fi 00:07:53.195 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:07:53.195 00:07:53.461 [Pipeline] } 00:07:53.512 [Pipeline] // withEnv 00:07:53.522 [Pipeline] setCustomBuildProperty 00:07:53.536 [Pipeline] stage 00:07:53.538 [Pipeline] { (Tests) 00:07:53.557 [Pipeline] sh 00:07:53.882 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:07:54.151 [Pipeline] timeout 00:07:54.151 Timeout set to expire in 40 min 00:07:54.153 [Pipeline] { 00:07:54.165 [Pipeline] sh 00:07:54.439 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:07:55.003 HEAD is now at 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:07:55.013 [Pipeline] sh 00:07:55.289 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:07:55.570 [Pipeline] sh 00:07:55.844 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:07:56.115 [Pipeline] sh 00:07:56.393 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:07:56.653 ++ readlink -f spdk_repo 00:07:56.653 + DIR_ROOT=/home/vagrant/spdk_repo 00:07:56.653 + [[ -n /home/vagrant/spdk_repo ]] 00:07:56.653 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:07:56.653 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:07:56.653 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:07:56.653 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:07:56.653 + [[ -d /home/vagrant/spdk_repo/output ]] 00:07:56.653 + cd /home/vagrant/spdk_repo 00:07:56.653 + source /etc/os-release 00:07:56.653 ++ NAME='Fedora Linux' 00:07:56.653 ++ VERSION='38 (Cloud Edition)' 00:07:56.653 ++ ID=fedora 00:07:56.653 ++ VERSION_ID=38 00:07:56.653 ++ VERSION_CODENAME= 00:07:56.653 ++ PLATFORM_ID=platform:f38 00:07:56.653 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:07:56.653 ++ ANSI_COLOR='0;38;2;60;110;180' 00:07:56.653 ++ LOGO=fedora-logo-icon 00:07:56.653 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:07:56.653 ++ HOME_URL=https://fedoraproject.org/ 00:07:56.653 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:07:56.653 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:07:56.653 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:07:56.653 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:07:56.653 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:07:56.653 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:07:56.653 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:07:56.653 ++ SUPPORT_END=2024-05-14 00:07:56.653 ++ VARIANT='Cloud Edition' 00:07:56.653 ++ VARIANT_ID=cloud 00:07:56.653 + uname -a 00:07:56.653 Linux fedora38-cloud-1701806725-069-updated-1701632595 6.5.12-200.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 3 20:08:38 UTC 2023 x86_64 GNU/Linux 00:07:56.653 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:56.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.911 Hugepages 00:07:56.911 node hugesize free / total 00:07:56.911 node0 1048576kB 0 / 0 00:07:56.911 node0 2048kB 0 / 0 00:07:56.911 00:07:56.911 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:56.911 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:57.169 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:57.169 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:57.169 + rm -f /tmp/spdk-ld-path 00:07:57.169 + source autorun-spdk.conf 00:07:57.169 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:57.169 ++ SPDK_TEST_NVMF=1 00:07:57.169 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:57.169 ++ SPDK_TEST_URING=1 00:07:57.169 ++ SPDK_TEST_USDT=1 00:07:57.169 ++ SPDK_RUN_UBSAN=1 00:07:57.169 ++ NET_TYPE=virt 00:07:57.169 ++ SPDK_TEST_NATIVE_DPDK=main 00:07:57.170 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:57.170 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:57.170 ++ RUN_NIGHTLY=1 00:07:57.170 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:07:57.170 + [[ -n '' ]] 00:07:57.170 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:07:57.170 + for M in /var/spdk/build-*-manifest.txt 00:07:57.170 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:07:57.170 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:57.170 + for M in /var/spdk/build-*-manifest.txt 00:07:57.170 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:07:57.170 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:57.170 + for M in /var/spdk/build-*-manifest.txt 00:07:57.170 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:07:57.170 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:07:57.170 ++ uname 00:07:57.170 + [[ Linux == \L\i\n\u\x ]] 00:07:57.170 + sudo dmesg -T 00:07:57.170 + sudo dmesg --clear 00:07:57.170 + dmesg_pid=5757 00:07:57.170 + [[ Fedora Linux == FreeBSD ]] 00:07:57.170 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:57.170 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:57.170 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:07:57.170 + sudo dmesg -Tw 00:07:57.170 + [[ -x /usr/src/fio-static/fio ]] 00:07:57.170 + export FIO_BIN=/usr/src/fio-static/fio 00:07:57.170 + FIO_BIN=/usr/src/fio-static/fio 00:07:57.170 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:07:57.170 + [[ ! -v VFIO_QEMU_BIN ]] 00:07:57.170 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:07:57.170 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:57.170 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:57.170 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:07:57.170 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:57.170 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:57.170 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:07:57.170 Test configuration: 00:07:57.170 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:57.170 SPDK_TEST_NVMF=1 00:07:57.170 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:57.170 SPDK_TEST_URING=1 00:07:57.170 SPDK_TEST_USDT=1 00:07:57.170 SPDK_RUN_UBSAN=1 00:07:57.170 NET_TYPE=virt 00:07:57.170 SPDK_TEST_NATIVE_DPDK=main 00:07:57.170 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:57.170 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:57.428 RUN_NIGHTLY=1 13:26:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.428 13:26:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:07:57.428 13:26:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.428 13:26:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.428 13:26:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.428 13:26:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.428 13:26:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.428 13:26:10 -- paths/export.sh@5 -- $ export PATH 00:07:57.428 13:26:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.428 13:26:10 -- 
common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:07:57.428 13:26:10 -- common/autobuild_common.sh@437 -- $ date +%s 00:07:57.428 13:26:10 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715779570.XXXXXX 00:07:57.428 13:26:10 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715779570.RJzaYd 00:07:57.428 13:26:10 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:07:57.428 13:26:10 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:07:57.428 13:26:10 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:07:57.428 13:26:10 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:07:57.428 13:26:10 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:07:57.428 13:26:10 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:07:57.428 13:26:10 -- common/autobuild_common.sh@453 -- $ get_config_params 00:07:57.428 13:26:10 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:07:57.428 13:26:10 -- common/autotest_common.sh@10 -- $ set +x 00:07:57.428 13:26:10 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:07:57.428 13:26:10 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:07:57.428 13:26:10 -- pm/common@17 -- $ local monitor 00:07:57.428 13:26:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.428 13:26:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:57.428 13:26:10 -- pm/common@25 -- $ sleep 1 00:07:57.428 13:26:10 -- pm/common@21 -- $ date +%s 00:07:57.428 13:26:10 -- pm/common@21 -- $ date +%s 00:07:57.428 13:26:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715779570 00:07:57.428 13:26:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715779570 00:07:57.428 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715779570_collect-vmstat.pm.log 00:07:57.428 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715779570_collect-cpu-load.pm.log 00:07:58.363 13:26:11 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:07:58.363 13:26:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:07:58.363 13:26:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:07:58.363 13:26:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:07:58.363 13:26:11 -- spdk/autobuild.sh@16 -- $ date -u 00:07:58.363 Wed May 15 01:26:11 PM UTC 2024 00:07:58.363 13:26:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:07:58.363 v24.05-pre-662-g253cca4fc 00:07:58.363 13:26:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:07:58.363 13:26:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:07:58.363 13:26:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:07:58.363 13:26:11 -- 
common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:07:58.363 13:26:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:07:58.363 13:26:11 -- common/autotest_common.sh@10 -- $ set +x 00:07:58.363 ************************************ 00:07:58.363 START TEST ubsan 00:07:58.363 ************************************ 00:07:58.363 using ubsan 00:07:58.363 13:26:11 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:07:58.363 00:07:58.363 real 0m0.000s 00:07:58.363 user 0m0.000s 00:07:58.363 sys 0m0.000s 00:07:58.363 13:26:11 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:07:58.363 13:26:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:07:58.363 ************************************ 00:07:58.363 END TEST ubsan 00:07:58.363 ************************************ 00:07:58.363 13:26:11 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:07:58.363 13:26:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:07:58.363 13:26:11 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:07:58.363 13:26:11 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:07:58.363 13:26:11 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:07:58.363 13:26:11 -- common/autotest_common.sh@10 -- $ set +x 00:07:58.363 ************************************ 00:07:58.363 START TEST build_native_dpdk 00:07:58.363 ************************************ 00:07:58.363 13:26:11 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:07:58.363 7e06c0de19 examples: move alignment attribute on types for MSVC 00:07:58.363 27595cd830 drivers: move alignment attribute on types for MSVC 00:07:58.363 0efea35a2b app: move alignment attribute on types for MSVC 00:07:58.363 e2e546ab5b version: 24.07-rc0 00:07:58.363 a9778aad62 version: 24.03.0 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:07:58.363 13:26:11 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:07:58.363 
13:26:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:07:58.363 13:26:11 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:07:58.622 13:26:11 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:07:58.622 patching file config/rte_config.h 00:07:58.622 Hunk #1 succeeded at 70 (offset 11 lines). 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:07:58.622 13:26:11 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:08:03.889 The Meson build system 00:08:03.889 Version: 1.3.0 00:08:03.889 Source dir: /home/vagrant/spdk_repo/dpdk 00:08:03.889 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:08:03.889 Build type: native build 00:08:03.889 Program cat found: YES (/usr/bin/cat) 00:08:03.889 Project name: DPDK 00:08:03.889 Project version: 24.07.0-rc0 00:08:03.889 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:03.889 C linker for the host machine: gcc ld.bfd 2.39-16 00:08:03.889 Host machine cpu family: x86_64 00:08:03.889 Host machine cpu: x86_64 00:08:03.889 Message: ## Building in Developer Mode ## 00:08:03.889 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:03.889 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:08:03.889 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:08:03.889 Program python3 found: YES (/usr/bin/python3) 00:08:03.889 Program cat found: YES (/usr/bin/cat) 00:08:03.889 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:08:03.889 Compiler for C supports arguments -march=native: YES 00:08:03.889 Checking for size of "void *" : 8 00:08:03.889 Checking for size of "void *" : 8 (cached) 00:08:03.889 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:08:03.889 Library m found: YES 00:08:03.889 Library numa found: YES 00:08:03.889 Has header "numaif.h" : YES 00:08:03.889 Library fdt found: NO 00:08:03.889 Library execinfo found: NO 00:08:03.889 Has header "execinfo.h" : YES 00:08:03.889 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:03.889 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:03.889 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:03.889 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:03.889 Run-time dependency openssl found: YES 3.0.9 00:08:03.889 Run-time dependency libpcap found: YES 1.10.4 00:08:03.889 Has header "pcap.h" with dependency libpcap: YES 00:08:03.889 Compiler for C supports arguments -Wcast-qual: YES 00:08:03.889 Compiler for C supports arguments -Wdeprecated: YES 00:08:03.889 Compiler for C supports arguments -Wformat: YES 00:08:03.889 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:03.889 Compiler for C supports arguments -Wformat-security: NO 00:08:03.889 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:03.889 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:03.889 Compiler for C supports arguments -Wnested-externs: YES 00:08:03.889 Compiler for C supports arguments -Wold-style-definition: YES 00:08:03.889 Compiler for C supports arguments -Wpointer-arith: YES 00:08:03.889 Compiler for C supports arguments -Wsign-compare: YES 00:08:03.889 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:03.889 Compiler for C supports arguments -Wundef: YES 00:08:03.889 Compiler for C supports arguments -Wwrite-strings: YES 00:08:03.889 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:03.889 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:03.889 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:03.889 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:03.889 Program objdump found: YES (/usr/bin/objdump) 00:08:03.889 Compiler for C supports arguments -mavx512f: YES 00:08:03.889 Checking if "AVX512 checking" compiles: YES 00:08:03.889 Fetching value of define "__SSE4_2__" : 1 00:08:03.889 Fetching value of define "__AES__" : 1 00:08:03.889 Fetching value of define "__AVX__" : 1 00:08:03.889 Fetching value of define "__AVX2__" : 1 00:08:03.889 Fetching value of define "__AVX512BW__" : 1 00:08:03.889 Fetching value of define "__AVX512CD__" : 1 00:08:03.889 Fetching value of define "__AVX512DQ__" : 1 00:08:03.889 Fetching value of define "__AVX512F__" : 1 00:08:03.889 Fetching value of define "__AVX512VL__" : 1 00:08:03.889 Fetching value of define "__PCLMUL__" : 1 00:08:03.889 Fetching value of define "__RDRND__" : 1 00:08:03.889 Fetching value of define "__RDSEED__" : 1 00:08:03.889 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:03.889 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:03.889 Message: lib/log: Defining dependency "log" 00:08:03.889 Message: lib/kvargs: Defining dependency "kvargs" 00:08:03.889 Message: lib/argparse: Defining dependency "argparse" 00:08:03.889 Message: lib/telemetry: Defining dependency "telemetry" 00:08:03.889 Checking for function "getentropy" : NO 00:08:03.889 Message: lib/eal: Defining dependency "eal" 
00:08:03.889 Message: lib/ring: Defining dependency "ring" 00:08:03.889 Message: lib/rcu: Defining dependency "rcu" 00:08:03.889 Message: lib/mempool: Defining dependency "mempool" 00:08:03.889 Message: lib/mbuf: Defining dependency "mbuf" 00:08:03.889 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:03.889 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:08:03.889 Compiler for C supports arguments -mpclmul: YES 00:08:03.889 Compiler for C supports arguments -maes: YES 00:08:03.889 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:03.889 Compiler for C supports arguments -mavx512bw: YES 00:08:03.889 Compiler for C supports arguments -mavx512dq: YES 00:08:03.889 Compiler for C supports arguments -mavx512vl: YES 00:08:03.889 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:03.889 Compiler for C supports arguments -mavx2: YES 00:08:03.889 Compiler for C supports arguments -mavx: YES 00:08:03.889 Message: lib/net: Defining dependency "net" 00:08:03.889 Message: lib/meter: Defining dependency "meter" 00:08:03.889 Message: lib/ethdev: Defining dependency "ethdev" 00:08:03.889 Message: lib/pci: Defining dependency "pci" 00:08:03.889 Message: lib/cmdline: Defining dependency "cmdline" 00:08:03.889 Message: lib/metrics: Defining dependency "metrics" 00:08:03.889 Message: lib/hash: Defining dependency "hash" 00:08:03.889 Message: lib/timer: Defining dependency "timer" 00:08:03.889 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512VL__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512CD__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:03.889 Message: lib/acl: Defining dependency "acl" 00:08:03.889 Message: lib/bbdev: Defining dependency "bbdev" 00:08:03.889 Message: lib/bitratestats: Defining dependency "bitratestats" 00:08:03.889 Run-time dependency libelf found: YES 0.190 00:08:03.889 Message: lib/bpf: Defining dependency "bpf" 00:08:03.889 Message: lib/cfgfile: Defining dependency "cfgfile" 00:08:03.889 Message: lib/compressdev: Defining dependency "compressdev" 00:08:03.889 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:03.889 Message: lib/distributor: Defining dependency "distributor" 00:08:03.889 Message: lib/dmadev: Defining dependency "dmadev" 00:08:03.889 Message: lib/efd: Defining dependency "efd" 00:08:03.889 Message: lib/eventdev: Defining dependency "eventdev" 00:08:03.889 Message: lib/dispatcher: Defining dependency "dispatcher" 00:08:03.889 Message: lib/gpudev: Defining dependency "gpudev" 00:08:03.889 Message: lib/gro: Defining dependency "gro" 00:08:03.889 Message: lib/gso: Defining dependency "gso" 00:08:03.889 Message: lib/ip_frag: Defining dependency "ip_frag" 00:08:03.889 Message: lib/jobstats: Defining dependency "jobstats" 00:08:03.889 Message: lib/latencystats: Defining dependency "latencystats" 00:08:03.889 Message: lib/lpm: Defining dependency "lpm" 00:08:03.889 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512IFMA__" : (undefined) 00:08:03.889 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:08:03.889 Message: 
lib/member: Defining dependency "member" 00:08:03.889 Message: lib/pcapng: Defining dependency "pcapng" 00:08:03.889 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:03.889 Message: lib/power: Defining dependency "power" 00:08:03.889 Message: lib/rawdev: Defining dependency "rawdev" 00:08:03.889 Message: lib/regexdev: Defining dependency "regexdev" 00:08:03.889 Message: lib/mldev: Defining dependency "mldev" 00:08:03.889 Message: lib/rib: Defining dependency "rib" 00:08:03.889 Message: lib/reorder: Defining dependency "reorder" 00:08:03.889 Message: lib/sched: Defining dependency "sched" 00:08:03.889 Message: lib/security: Defining dependency "security" 00:08:03.889 Message: lib/stack: Defining dependency "stack" 00:08:03.889 Has header "linux/userfaultfd.h" : YES 00:08:03.889 Has header "linux/vduse.h" : YES 00:08:03.889 Message: lib/vhost: Defining dependency "vhost" 00:08:03.889 Message: lib/ipsec: Defining dependency "ipsec" 00:08:03.889 Message: lib/pdcp: Defining dependency "pdcp" 00:08:03.889 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:08:03.889 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:03.889 Message: lib/fib: Defining dependency "fib" 00:08:03.889 Message: lib/port: Defining dependency "port" 00:08:03.889 Message: lib/pdump: Defining dependency "pdump" 00:08:03.889 Message: lib/table: Defining dependency "table" 00:08:03.889 Message: lib/pipeline: Defining dependency "pipeline" 00:08:03.889 Message: lib/graph: Defining dependency "graph" 00:08:03.889 Message: lib/node: Defining dependency "node" 00:08:03.889 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:03.889 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:03.889 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:03.889 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:05.285 Compiler for C supports arguments -Wno-sign-compare: YES 00:08:05.285 Compiler for C supports arguments -Wno-unused-value: YES 00:08:05.285 Compiler for C supports arguments -Wno-format: YES 00:08:05.285 Compiler for C supports arguments -Wno-format-security: YES 00:08:05.285 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:08:05.285 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:08:05.285 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:08:05.285 Compiler for C supports arguments -Wno-unused-parameter: YES 00:08:05.285 Fetching value of define "__AVX512F__" : 1 (cached) 00:08:05.285 Fetching value of define "__AVX512BW__" : 1 (cached) 00:08:05.285 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:05.285 Compiler for C supports arguments -mavx512bw: YES (cached) 00:08:05.285 Compiler for C supports arguments -march=skylake-avx512: YES 00:08:05.285 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:08:05.285 Has header "sys/epoll.h" : YES 00:08:05.285 Program doxygen found: YES (/usr/bin/doxygen) 00:08:05.285 Configuring doxy-api-html.conf using configuration 00:08:05.285 Configuring doxy-api-man.conf using configuration 00:08:05.285 Program mandb found: YES (/usr/bin/mandb) 00:08:05.285 Program sphinx-build found: NO 00:08:05.285 Configuring rte_build_config.h using configuration 00:08:05.285 Message: 00:08:05.285 ================= 00:08:05.285 Applications Enabled 00:08:05.285 ================= 00:08:05.285 00:08:05.285 apps: 00:08:05.285 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf, 00:08:05.285 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:08:05.285 test-pmd, test-regex, test-sad, test-security-perf, 00:08:05.285 00:08:05.285 Message: 00:08:05.285 ================= 00:08:05.285 Libraries Enabled 00:08:05.285 ================= 00:08:05.285 00:08:05.285 libs: 00:08:05.285 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:08:05.285 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:08:05.285 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:08:05.285 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:08:05.285 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:08:05.285 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:08:05.285 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:08:05.285 node, 00:08:05.285 00:08:05.285 Message: 00:08:05.285 =============== 00:08:05.285 Drivers Enabled 00:08:05.285 =============== 00:08:05.285 00:08:05.285 common: 00:08:05.285 00:08:05.285 bus: 00:08:05.285 pci, vdev, 00:08:05.285 mempool: 00:08:05.285 ring, 00:08:05.285 dma: 00:08:05.285 00:08:05.285 net: 00:08:05.285 i40e, 00:08:05.285 raw: 00:08:05.285 00:08:05.285 crypto: 00:08:05.285 00:08:05.285 compress: 00:08:05.285 00:08:05.285 regex: 00:08:05.285 00:08:05.285 ml: 00:08:05.285 00:08:05.285 vdpa: 00:08:05.285 00:08:05.285 event: 00:08:05.285 00:08:05.285 baseband: 00:08:05.285 00:08:05.285 gpu: 00:08:05.285 00:08:05.285 00:08:05.285 Message: 00:08:05.285 ================= 00:08:05.285 Content Skipped 00:08:05.285 ================= 00:08:05.285 00:08:05.285 apps: 00:08:05.285 00:08:05.285 libs: 00:08:05.285 00:08:05.285 drivers: 00:08:05.285 common/cpt: not in enabled drivers build config 00:08:05.285 common/dpaax: not in enabled drivers build config 00:08:05.285 common/iavf: not in enabled drivers build config 00:08:05.285 common/idpf: not in enabled drivers build config 00:08:05.285 common/ionic: not in enabled drivers build config 00:08:05.285 common/mvep: not in enabled drivers build config 00:08:05.285 common/octeontx: not in enabled drivers build config 00:08:05.285 bus/auxiliary: not in enabled drivers build config 00:08:05.285 bus/cdx: not in enabled drivers build config 00:08:05.285 bus/dpaa: not in enabled drivers build config 00:08:05.285 bus/fslmc: not in enabled drivers build config 00:08:05.285 bus/ifpga: not in enabled drivers build config 00:08:05.285 bus/platform: not in enabled drivers build config 00:08:05.285 bus/uacce: not in enabled drivers build config 00:08:05.285 bus/vmbus: not in enabled drivers build config 00:08:05.285 common/cnxk: not in enabled drivers build config 00:08:05.285 common/mlx5: not in enabled drivers build config 00:08:05.285 common/nfp: not in enabled drivers build config 00:08:05.285 common/nitrox: not in enabled drivers build config 00:08:05.285 common/qat: not in enabled drivers build config 00:08:05.285 common/sfc_efx: not in enabled drivers build config 00:08:05.285 mempool/bucket: not in enabled drivers build config 00:08:05.285 mempool/cnxk: not in enabled drivers build config 00:08:05.285 mempool/dpaa: not in enabled drivers build config 00:08:05.285 mempool/dpaa2: not in enabled drivers build config 00:08:05.285 mempool/octeontx: not in enabled drivers build config 00:08:05.285 mempool/stack: not in enabled drivers build config 00:08:05.285 dma/cnxk: not in enabled drivers build config 00:08:05.285 dma/dpaa: not in enabled 
drivers build config 00:08:05.285 dma/dpaa2: not in enabled drivers build config 00:08:05.285 dma/hisilicon: not in enabled drivers build config 00:08:05.285 dma/idxd: not in enabled drivers build config 00:08:05.285 dma/ioat: not in enabled drivers build config 00:08:05.285 dma/skeleton: not in enabled drivers build config 00:08:05.285 net/af_packet: not in enabled drivers build config 00:08:05.285 net/af_xdp: not in enabled drivers build config 00:08:05.285 net/ark: not in enabled drivers build config 00:08:05.285 net/atlantic: not in enabled drivers build config 00:08:05.285 net/avp: not in enabled drivers build config 00:08:05.285 net/axgbe: not in enabled drivers build config 00:08:05.285 net/bnx2x: not in enabled drivers build config 00:08:05.285 net/bnxt: not in enabled drivers build config 00:08:05.285 net/bonding: not in enabled drivers build config 00:08:05.285 net/cnxk: not in enabled drivers build config 00:08:05.285 net/cpfl: not in enabled drivers build config 00:08:05.285 net/cxgbe: not in enabled drivers build config 00:08:05.285 net/dpaa: not in enabled drivers build config 00:08:05.285 net/dpaa2: not in enabled drivers build config 00:08:05.285 net/e1000: not in enabled drivers build config 00:08:05.285 net/ena: not in enabled drivers build config 00:08:05.285 net/enetc: not in enabled drivers build config 00:08:05.285 net/enetfec: not in enabled drivers build config 00:08:05.285 net/enic: not in enabled drivers build config 00:08:05.285 net/failsafe: not in enabled drivers build config 00:08:05.285 net/fm10k: not in enabled drivers build config 00:08:05.285 net/gve: not in enabled drivers build config 00:08:05.285 net/hinic: not in enabled drivers build config 00:08:05.285 net/hns3: not in enabled drivers build config 00:08:05.285 net/iavf: not in enabled drivers build config 00:08:05.285 net/ice: not in enabled drivers build config 00:08:05.285 net/idpf: not in enabled drivers build config 00:08:05.285 net/igc: not in enabled drivers build config 00:08:05.285 net/ionic: not in enabled drivers build config 00:08:05.285 net/ipn3ke: not in enabled drivers build config 00:08:05.285 net/ixgbe: not in enabled drivers build config 00:08:05.285 net/mana: not in enabled drivers build config 00:08:05.285 net/memif: not in enabled drivers build config 00:08:05.285 net/mlx4: not in enabled drivers build config 00:08:05.285 net/mlx5: not in enabled drivers build config 00:08:05.285 net/mvneta: not in enabled drivers build config 00:08:05.285 net/mvpp2: not in enabled drivers build config 00:08:05.285 net/netvsc: not in enabled drivers build config 00:08:05.285 net/nfb: not in enabled drivers build config 00:08:05.285 net/nfp: not in enabled drivers build config 00:08:05.285 net/ngbe: not in enabled drivers build config 00:08:05.285 net/null: not in enabled drivers build config 00:08:05.285 net/octeontx: not in enabled drivers build config 00:08:05.285 net/octeon_ep: not in enabled drivers build config 00:08:05.285 net/pcap: not in enabled drivers build config 00:08:05.285 net/pfe: not in enabled drivers build config 00:08:05.285 net/qede: not in enabled drivers build config 00:08:05.285 net/ring: not in enabled drivers build config 00:08:05.285 net/sfc: not in enabled drivers build config 00:08:05.285 net/softnic: not in enabled drivers build config 00:08:05.285 net/tap: not in enabled drivers build config 00:08:05.285 net/thunderx: not in enabled drivers build config 00:08:05.285 net/txgbe: not in enabled drivers build config 00:08:05.285 net/vdev_netvsc: not in enabled drivers 
build config 00:08:05.285 net/vhost: not in enabled drivers build config 00:08:05.285 net/virtio: not in enabled drivers build config 00:08:05.285 net/vmxnet3: not in enabled drivers build config 00:08:05.285 raw/cnxk_bphy: not in enabled drivers build config 00:08:05.285 raw/cnxk_gpio: not in enabled drivers build config 00:08:05.285 raw/dpaa2_cmdif: not in enabled drivers build config 00:08:05.285 raw/ifpga: not in enabled drivers build config 00:08:05.285 raw/ntb: not in enabled drivers build config 00:08:05.285 raw/skeleton: not in enabled drivers build config 00:08:05.285 crypto/armv8: not in enabled drivers build config 00:08:05.285 crypto/bcmfs: not in enabled drivers build config 00:08:05.285 crypto/caam_jr: not in enabled drivers build config 00:08:05.285 crypto/ccp: not in enabled drivers build config 00:08:05.285 crypto/cnxk: not in enabled drivers build config 00:08:05.285 crypto/dpaa_sec: not in enabled drivers build config 00:08:05.285 crypto/dpaa2_sec: not in enabled drivers build config 00:08:05.285 crypto/ipsec_mb: not in enabled drivers build config 00:08:05.285 crypto/mlx5: not in enabled drivers build config 00:08:05.285 crypto/mvsam: not in enabled drivers build config 00:08:05.285 crypto/nitrox: not in enabled drivers build config 00:08:05.286 crypto/null: not in enabled drivers build config 00:08:05.286 crypto/octeontx: not in enabled drivers build config 00:08:05.286 crypto/openssl: not in enabled drivers build config 00:08:05.286 crypto/scheduler: not in enabled drivers build config 00:08:05.286 crypto/uadk: not in enabled drivers build config 00:08:05.286 crypto/virtio: not in enabled drivers build config 00:08:05.286 compress/isal: not in enabled drivers build config 00:08:05.286 compress/mlx5: not in enabled drivers build config 00:08:05.286 compress/nitrox: not in enabled drivers build config 00:08:05.286 compress/octeontx: not in enabled drivers build config 00:08:05.286 compress/zlib: not in enabled drivers build config 00:08:05.286 regex/mlx5: not in enabled drivers build config 00:08:05.286 regex/cn9k: not in enabled drivers build config 00:08:05.286 ml/cnxk: not in enabled drivers build config 00:08:05.286 vdpa/ifc: not in enabled drivers build config 00:08:05.286 vdpa/mlx5: not in enabled drivers build config 00:08:05.286 vdpa/nfp: not in enabled drivers build config 00:08:05.286 vdpa/sfc: not in enabled drivers build config 00:08:05.286 event/cnxk: not in enabled drivers build config 00:08:05.286 event/dlb2: not in enabled drivers build config 00:08:05.286 event/dpaa: not in enabled drivers build config 00:08:05.286 event/dpaa2: not in enabled drivers build config 00:08:05.286 event/dsw: not in enabled drivers build config 00:08:05.286 event/opdl: not in enabled drivers build config 00:08:05.286 event/skeleton: not in enabled drivers build config 00:08:05.286 event/sw: not in enabled drivers build config 00:08:05.286 event/octeontx: not in enabled drivers build config 00:08:05.286 baseband/acc: not in enabled drivers build config 00:08:05.286 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:08:05.286 baseband/fpga_lte_fec: not in enabled drivers build config 00:08:05.286 baseband/la12xx: not in enabled drivers build config 00:08:05.286 baseband/null: not in enabled drivers build config 00:08:05.286 baseband/turbo_sw: not in enabled drivers build config 00:08:05.286 gpu/cuda: not in enabled drivers build config 00:08:05.286 00:08:05.286 00:08:05.286 Build targets in project: 221 00:08:05.286 00:08:05.286 DPDK 24.07.0-rc0 00:08:05.286 
00:08:05.286 User defined options 00:08:05.286 libdir : lib 00:08:05.286 prefix : /home/vagrant/spdk_repo/dpdk/build 00:08:05.286 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:08:05.286 c_link_args : 00:08:05.286 enable_docs : false 00:08:05.286 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:08:05.286 enable_kmods : false 00:08:05.286 machine : native 00:08:05.286 tests : false 00:08:05.286 00:08:05.286 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:05.286 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:08:05.286 13:26:18 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:08:05.544 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:08:05.544 [1/719] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:05.544 [2/719] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:05.801 [3/719] Linking static target lib/librte_kvargs.a 00:08:05.801 [4/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:05.801 [5/719] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:05.801 [6/719] Linking static target lib/librte_log.a 00:08:05.801 [7/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:06.059 [8/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:06.059 [9/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:06.059 [10/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:06.059 [11/719] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:08:06.059 [12/719] Linking static target lib/librte_argparse.a 00:08:06.059 [13/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:06.059 [14/719] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.059 [15/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:06.059 [16/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:06.317 [17/719] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.317 [18/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:06.576 [19/719] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:06.576 [20/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:06.576 [21/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:06.576 [22/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:06.576 [23/719] Linking target lib/librte_log.so.24.2 00:08:06.576 [24/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:06.835 [25/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:06.835 [26/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:06.835 [27/719] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:06.835 [28/719] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:08:06.835 [29/719] Linking static target lib/librte_telemetry.a 00:08:06.835 [30/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:06.835 [31/719] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:06.835 [32/719] Linking target lib/librte_argparse.so.24.2 00:08:06.835 [33/719] Linking target lib/librte_kvargs.so.24.2 00:08:07.094 [34/719] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:08:07.094 [35/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:07.094 [36/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:07.094 [37/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:07.352 [38/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:07.352 [39/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:07.352 [40/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:07.611 [41/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:07.611 [42/719] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:07.611 [43/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:07.611 [44/719] Linking target lib/librte_telemetry.so.24.2 00:08:07.611 [45/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:07.611 [46/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:07.611 [47/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:07.612 [48/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:07.612 [49/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:07.869 [50/719] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:08:07.869 [51/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:08.136 [52/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:08.136 [53/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:08.136 [54/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:08.136 [55/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:08.136 [56/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:08.396 [57/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:08.396 [58/719] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:08.396 [59/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:08.396 [60/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:08.396 [61/719] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:08.655 [62/719] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:08.655 [63/719] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:08.655 [64/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:08.655 [65/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:08.655 [66/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:08.655 [67/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:08.913 [68/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:08.913 [69/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:08.913 [70/719] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 
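Note on the configuration above: the "User defined options" summary records how this DPDK tree was set up before the `ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10` build that follows, and the long "not in enabled drivers build config" list is simply the complement of the enable_drivers filter. Meson itself warns that the bare `meson [options]` form used here is deprecated, so an equivalent explicit invocation would look roughly like the sketch below; this is reconstructed only from the summary, and the paths and build-directory name are specific to this CI host.

    # Sketch: meson setup invocation equivalent to the "User defined options" summary above.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    # Then build with the same parallelism used in this log:
    ninja -C build-tmp -j10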
00:08:08.913 [71/719] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:09.171 [72/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:09.429 [73/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:09.429 [74/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:09.429 [75/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:09.429 [76/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:09.429 [77/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:09.429 [78/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:09.429 [79/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:09.429 [80/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:09.687 [81/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:09.687 [82/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:09.687 [83/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:09.945 [84/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:09.945 [85/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:09.945 [86/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:09.945 [87/719] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:09.945 [88/719] Linking static target lib/librte_ring.a 00:08:09.945 [89/719] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:10.202 [90/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:10.202 [91/719] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:10.202 [92/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:10.460 [93/719] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:10.460 [94/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:10.460 [95/719] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:10.460 [96/719] Linking static target lib/librte_eal.a 00:08:10.719 [97/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:10.719 [98/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:10.719 [99/719] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:10.719 [100/719] Linking static target lib/librte_rcu.a 00:08:10.977 [101/719] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:10.977 [102/719] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:10.977 [103/719] Linking static target lib/librte_mempool.a 00:08:10.977 [104/719] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:10.977 [105/719] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:10.977 [106/719] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:11.233 [107/719] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.233 [108/719] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:11.233 [109/719] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:11.233 [110/719] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:11.233 [111/719] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:11.512 [112/719] Linking static target lib/librte_mbuf.a 00:08:11.512 [113/719] Linking static 
target lib/librte_net.a 00:08:11.512 [114/719] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:11.512 [115/719] Linking static target lib/librte_meter.a 00:08:11.807 [116/719] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.807 [117/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:11.807 [118/719] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:11.807 [119/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:11.807 [120/719] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:12.066 [121/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:12.066 [122/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:12.066 [123/719] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:12.631 [124/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:12.631 [125/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:12.889 [126/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:12.889 [127/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:12.889 [128/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:12.889 [129/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:13.148 [130/719] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:13.148 [131/719] Linking static target lib/librte_pci.a 00:08:13.148 [132/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:13.148 [133/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:13.148 [134/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:13.406 [135/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:13.406 [136/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:13.406 [137/719] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:13.406 [138/719] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:13.406 [139/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:13.406 [140/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:13.406 [141/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:13.406 [142/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:13.699 [143/719] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:13.699 [144/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:13.699 [145/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:13.699 [146/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:13.699 [147/719] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:13.958 [148/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:13.958 [149/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:13.958 [150/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:13.958 [151/719] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:13.958 [152/719] Linking static target lib/librte_cmdline.a 
00:08:13.958 [153/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:08:13.958 [154/719] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:08:13.958 [155/719] Linking static target lib/librte_metrics.a 00:08:14.216 [156/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:14.216 [157/719] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:14.781 [158/719] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:08:14.781 [159/719] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:14.781 [160/719] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:14.781 [161/719] Linking static target lib/librte_timer.a 00:08:15.039 [162/719] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:08:15.297 [163/719] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:15.297 [164/719] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:08:15.297 [165/719] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:15.556 [166/719] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:08:15.556 [167/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:08:16.186 [168/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:08:16.186 [169/719] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:08:16.186 [170/719] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:08:16.186 [171/719] Linking static target lib/librte_bitratestats.a 00:08:16.186 [172/719] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:08:16.186 [173/719] Linking static target lib/librte_bbdev.a 00:08:16.444 [174/719] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:16.444 [175/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:08:16.444 [176/719] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:16.702 [177/719] Linking static target lib/librte_hash.a 00:08:16.702 [178/719] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:16.702 [179/719] Linking static target lib/librte_ethdev.a 00:08:16.702 [180/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:08:16.959 [181/719] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:08:16.959 [182/719] Linking static target lib/acl/libavx2_tmp.a 00:08:17.216 [183/719] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.216 [184/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:08:17.216 [185/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:08:17.474 [186/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:08:17.474 [187/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:08:17.474 [188/719] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:17.732 [189/719] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:08:17.732 [190/719] Linking static target lib/librte_cfgfile.a 00:08:17.732 [191/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:08:17.991 [192/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:17.991 [193/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:08:17.991 [194/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:08:18.249 
[195/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:18.249 [196/719] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:08:18.507 [197/719] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:18.507 [198/719] Linking static target lib/librte_compressdev.a 00:08:18.507 [199/719] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:08:18.507 [200/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:18.507 [201/719] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:08:18.507 [202/719] Linking static target lib/librte_acl.a 00:08:18.507 [203/719] Linking static target lib/librte_bpf.a 00:08:18.765 [204/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:18.765 [205/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:08:19.023 [206/719] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.023 [207/719] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.023 [208/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:08:19.023 [209/719] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:08:19.023 [210/719] Linking static target lib/librte_distributor.a 00:08:19.281 [211/719] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.281 [212/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:19.281 [213/719] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.540 [214/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:08:19.540 [215/719] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:19.540 [216/719] Linking static target lib/librte_dmadev.a 00:08:19.797 [217/719] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:08:20.405 [218/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:08:20.405 [219/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:08:20.405 [220/719] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.405 [221/719] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:08:20.405 [222/719] Linking static target lib/librte_efd.a 00:08:20.405 [223/719] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.663 [224/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:08:20.663 [225/719] Linking target lib/librte_eal.so.24.2 00:08:20.663 [226/719] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.663 [227/719] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:08:20.663 [228/719] Linking target lib/librte_ring.so.24.2 00:08:20.663 [229/719] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:20.921 [230/719] Linking target lib/librte_meter.so.24.2 00:08:20.921 [231/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:08:20.921 [232/719] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:08:20.921 [233/719] Linking target lib/librte_pci.so.24.2 
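Each enabled library follows the same pattern visible above: its objects are compiled once, linked into both a static archive (lib/librte_<name>.a) and a versioned shared object (lib/librte_<name>.so.24.2), after which meson generates a .symbols file and a <name>.sym_chk check step for the exported symbols. A quick spot check of one library after the build might look like the sketch below, assuming the build-tmp layout shown in this log.

    # Hypothetical spot check of the eal library artifacts (paths taken from this log).
    ls /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.a \
       /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so*
    # Confirm a well-known exported symbol is present in the shared object.
    nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.24.2 | grep rte_eal_init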
00:08:20.921 [234/719] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:08:20.921 [235/719] Linking target lib/librte_rcu.so.24.2 00:08:20.921 [236/719] Linking target lib/librte_mempool.so.24.2 00:08:20.921 [237/719] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:08:21.179 [238/719] Linking target lib/librte_timer.so.24.2 00:08:21.179 [239/719] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:08:21.179 [240/719] Linking target lib/librte_acl.so.24.2 00:08:21.179 [241/719] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:08:21.179 [242/719] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:08:21.179 [243/719] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:08:21.179 [244/719] Linking target lib/librte_cfgfile.so.24.2 00:08:21.179 [245/719] Linking static target lib/librte_cryptodev.a 00:08:21.179 [246/719] Linking target lib/librte_mbuf.so.24.2 00:08:21.179 [247/719] Linking target lib/librte_dmadev.so.24.2 00:08:21.179 [248/719] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:08:21.436 [249/719] Linking static target lib/librte_dispatcher.a 00:08:21.436 [250/719] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:08:21.436 [251/719] Linking static target lib/librte_gpudev.a 00:08:21.436 [252/719] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:08:21.436 [253/719] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:08:21.436 [254/719] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:08:21.436 [255/719] Linking target lib/librte_bbdev.so.24.2 00:08:21.436 [256/719] Linking target lib/librte_net.so.24.2 00:08:21.436 [257/719] Linking target lib/librte_compressdev.so.24.2 00:08:21.436 [258/719] Linking target lib/librte_distributor.so.24.2 00:08:21.694 [259/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:08:21.694 [260/719] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:08:21.694 [261/719] Linking target lib/librte_cmdline.so.24.2 00:08:21.694 [262/719] Linking target lib/librte_hash.so.24.2 00:08:21.952 [263/719] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:08:21.952 [264/719] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:08:21.952 [265/719] Linking target lib/librte_efd.so.24.2 00:08:21.952 [266/719] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:08:22.210 [267/719] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:08:22.210 [268/719] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:08:22.210 [269/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:08:22.481 [270/719] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.481 [271/719] Linking target lib/librte_gpudev.so.24.2 00:08:22.481 [272/719] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:08:22.739 [273/719] Linking static target lib/librte_eventdev.a 00:08:22.739 [274/719] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:08:22.739 [275/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:08:22.739 [276/719] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:08:22.739 [277/719] Linking static target 
lib/librte_gro.a 00:08:22.739 [278/719] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:08:22.739 [279/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:08:22.739 [280/719] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:08:22.997 [281/719] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.997 [282/719] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:22.997 [283/719] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:08:22.997 [284/719] Linking target lib/librte_cryptodev.so.24.2 00:08:23.255 [285/719] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:08:23.255 [286/719] Linking static target lib/librte_gso.a 00:08:23.255 [287/719] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:08:23.255 [288/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:08:23.513 [289/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:08:23.513 [290/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:08:23.513 [291/719] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.513 [292/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:08:23.772 [293/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:08:23.772 [294/719] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:08:23.772 [295/719] Linking static target lib/librte_jobstats.a 00:08:23.772 [296/719] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:08:23.772 [297/719] Linking static target lib/librte_ip_frag.a 00:08:24.030 [298/719] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:08:24.030 [299/719] Linking static target lib/librte_latencystats.a 00:08:24.030 [300/719] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:08:24.030 [301/719] Linking static target lib/member/libsketch_avx512_tmp.a 00:08:24.030 [302/719] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.030 [303/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:08:24.030 [304/719] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.030 [305/719] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:08:24.287 [306/719] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.287 [307/719] Linking target lib/librte_jobstats.so.24.2 00:08:24.287 [308/719] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.287 [309/719] Linking target lib/librte_ethdev.so.24.2 00:08:24.287 [310/719] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:24.287 [311/719] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:24.546 [312/719] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:08:24.546 [313/719] Linking target lib/librte_metrics.so.24.2 00:08:24.546 [314/719] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:08:24.803 [315/719] Linking target lib/librte_bitratestats.so.24.2 00:08:24.803 [316/719] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:24.803 [317/719] Linking 
target lib/librte_bpf.so.24.2 00:08:24.803 [318/719] Linking target lib/librte_gro.so.24.2 00:08:24.803 [319/719] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:24.803 [320/719] Linking target lib/librte_gso.so.24.2 00:08:25.062 [321/719] Linking target lib/librte_ip_frag.so.24.2 00:08:25.062 [322/719] Linking target lib/librte_latencystats.so.24.2 00:08:25.062 [323/719] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:08:25.062 [324/719] Linking static target lib/librte_pcapng.a 00:08:25.062 [325/719] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:25.062 [326/719] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:08:25.062 [327/719] Linking static target lib/librte_lpm.a 00:08:25.062 [328/719] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:08:25.062 [329/719] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:08:25.062 [330/719] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:08:25.062 [331/719] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:25.062 [332/719] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:08:25.320 [333/719] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.320 [334/719] Linking target lib/librte_pcapng.so.24.2 00:08:25.578 [335/719] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:25.578 [336/719] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.578 [337/719] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:08:25.578 [338/719] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:25.578 [339/719] Linking target lib/librte_lpm.so.24.2 00:08:25.578 [340/719] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:25.578 [341/719] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:08:25.836 [342/719] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:08:25.836 [343/719] Linking static target lib/librte_rawdev.a 00:08:25.836 [344/719] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:25.836 [345/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:08:25.836 [346/719] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.094 [347/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:08:26.094 [348/719] Linking target lib/librte_eventdev.so.24.2 00:08:26.094 [349/719] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:08:26.094 [350/719] Linking static target lib/librte_regexdev.a 00:08:26.094 [351/719] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:08:26.094 [352/719] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:08:26.351 [353/719] Linking target lib/librte_dispatcher.so.24.2 00:08:26.351 [354/719] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:08:26.351 [355/719] Linking static target lib/librte_member.a 00:08:26.351 [356/719] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:26.351 [357/719] Linking static target lib/librte_power.a 00:08:26.609 [358/719] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.609 [359/719] Compiling C object 
lib/librte_rib.a.p/rib_rte_rib.c.o 00:08:26.609 [360/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:08:26.609 [361/719] Linking target lib/librte_rawdev.so.24.2 00:08:26.922 [362/719] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.922 [363/719] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:08:26.922 [364/719] Linking static target lib/librte_mldev.a 00:08:26.922 [365/719] Linking target lib/librte_member.so.24.2 00:08:26.922 [366/719] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:08:26.922 [367/719] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:26.922 [368/719] Linking static target lib/librte_reorder.a 00:08:27.180 [369/719] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:08:27.180 [370/719] Linking static target lib/librte_rib.a 00:08:27.180 [371/719] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.180 [372/719] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:08:27.180 [373/719] Linking target lib/librte_regexdev.so.24.2 00:08:27.180 [374/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:08:27.180 [375/719] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.180 [376/719] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:08:27.180 [377/719] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.439 [378/719] Linking target lib/librte_power.so.24.2 00:08:27.439 [379/719] Linking target lib/librte_reorder.so.24.2 00:08:27.439 [380/719] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:27.439 [381/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:08:27.696 [382/719] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:08:27.696 [383/719] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.696 [384/719] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:27.696 [385/719] Linking static target lib/librte_security.a 00:08:27.696 [386/719] Linking target lib/librte_rib.so.24.2 00:08:27.696 [387/719] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:08:27.696 [388/719] Linking static target lib/librte_stack.a 00:08:27.988 [389/719] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:08:27.988 [390/719] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:27.988 [391/719] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:08:27.988 [392/719] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:27.988 [393/719] Linking target lib/librte_stack.so.24.2 00:08:28.247 [394/719] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.247 [395/719] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:08:28.247 [396/719] Linking static target lib/librte_sched.a 00:08:28.247 [397/719] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:28.247 [398/719] Linking target lib/librte_security.so.24.2 00:08:28.505 [399/719] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:08:28.763 [400/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:28.763 [401/719] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 
00:08:28.763 [402/719] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.021 [403/719] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.021 [404/719] Linking target lib/librte_sched.so.24.2 00:08:29.021 [405/719] Linking target lib/librte_mldev.so.24.2 00:08:29.021 [406/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:29.021 [407/719] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:08:29.279 [408/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:08:29.537 [409/719] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:08:29.537 [410/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:08:29.794 [411/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:08:29.794 [412/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:29.794 [413/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:08:29.794 [414/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:08:30.053 [415/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:08:30.312 [416/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:08:30.312 [417/719] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:08:30.312 [418/719] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:08:30.312 [419/719] Linking static target lib/librte_ipsec.a 00:08:30.569 [420/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:08:30.569 [421/719] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:08:30.569 [422/719] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:08:30.569 [423/719] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:08:30.827 [424/719] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.827 [425/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:08:30.827 [426/719] Linking target lib/librte_ipsec.so.24.2 00:08:31.086 [427/719] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:08:31.086 [428/719] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:08:31.086 [429/719] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:08:31.345 [430/719] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:08:31.345 [431/719] Linking static target lib/librte_fib.a 00:08:31.345 [432/719] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:08:31.618 [433/719] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:08:31.618 [434/719] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:08:31.618 [435/719] Linking static target lib/librte_pdcp.a 00:08:31.618 [436/719] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:08:31.876 [437/719] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:08:31.876 [438/719] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:08:31.876 [439/719] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:31.876 [440/719] Linking target lib/librte_fib.so.24.2 00:08:32.135 [441/719] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.135 [442/719] Linking target lib/librte_pdcp.so.24.2 00:08:32.394 [443/719] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:08:32.394 [444/719] Compiling C object 
lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:08:32.651 [445/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:08:32.651 [446/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:08:32.651 [447/719] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:08:32.651 [448/719] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:08:32.910 [449/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:08:32.910 [450/719] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:08:33.169 [451/719] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:08:33.169 [452/719] Linking static target lib/librte_port.a 00:08:33.169 [453/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:08:33.427 [454/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:08:33.427 [455/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:08:33.427 [456/719] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:08:33.427 [457/719] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:08:33.685 [458/719] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:08:33.685 [459/719] Linking static target lib/librte_pdump.a 00:08:33.685 [460/719] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:08:33.943 [461/719] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.943 [462/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:08:33.943 [463/719] Linking target lib/librte_pdump.so.24.2 00:08:33.943 [464/719] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.943 [465/719] Linking target lib/librte_port.so.24.2 00:08:34.201 [466/719] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:08:34.201 [467/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:08:34.459 [468/719] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:08:34.459 [469/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:08:34.459 [470/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:08:34.718 [471/719] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:08:34.718 [472/719] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:08:34.718 [473/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:08:34.976 [474/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:08:34.976 [475/719] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:08:34.976 [476/719] Linking static target lib/librte_table.a 00:08:34.976 [477/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:08:35.233 [478/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:08:35.492 [479/719] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:35.750 [480/719] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:08:36.008 [481/719] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.008 [482/719] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:08:36.008 [483/719] Linking target lib/librte_table.so.24.2 00:08:36.008 [484/719] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:08:36.008 
[485/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:08:36.008 [486/719] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:08:36.356 [487/719] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:08:36.614 [488/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:08:36.614 [489/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:08:36.614 [490/719] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:08:36.614 [491/719] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:08:36.614 [492/719] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:08:36.873 [493/719] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:08:37.130 [494/719] Linking static target lib/librte_graph.a 00:08:37.130 [495/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:08:37.130 [496/719] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:08:37.130 [497/719] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:08:37.130 [498/719] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:08:37.389 [499/719] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:08:37.647 [500/719] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:08:37.905 [501/719] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.905 [502/719] Linking target lib/librte_graph.so.24.2 00:08:37.905 [503/719] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:08:38.163 [504/719] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:08:38.163 [505/719] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:08:38.163 [506/719] Compiling C object lib/librte_node.a.p/node_null.c.o 00:08:38.163 [507/719] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:08:38.420 [508/719] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:08:38.420 [509/719] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:08:38.420 [510/719] Compiling C object lib/librte_node.a.p/node_log.c.o 00:08:38.420 [511/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:38.678 [512/719] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:08:38.678 [513/719] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:08:38.937 [514/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:38.937 [515/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:38.937 [516/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:38.937 [517/719] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:08:38.937 [518/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:39.196 [519/719] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:08:39.196 [520/719] Linking static target lib/librte_node.a 00:08:39.196 [521/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:39.454 [522/719] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:39.454 [523/719] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:39.454 [524/719] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:39.454 [525/719] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:39.713 
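From here the build moves on to the drivers that survived the enable_drivers filter: bus/pci, bus/vdev, mempool/ring and net/i40e (plus its base code). For each one meson generates an rte_<driver>.pmd.c source and, as with the libraries, links both a static and a shared variant. A sketch for listing the resulting PMD shared objects once the build finishes, with the directory layout taken from this log:

    # Hypothetical check of the four enabled PMDs' shared objects.
    ls /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_bus_pci.so* \
       /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_bus_vdev.so* \
       /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_mempool_ring.so* \
       /home/vagrant/spdk_repo/dpdk/build-tmp/drivers/librte_net_i40e.so*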
[526/719] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.713 [527/719] Linking target lib/librte_node.so.24.2 00:08:39.713 [528/719] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:39.713 [529/719] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:39.713 [530/719] Linking static target drivers/librte_bus_pci.a 00:08:39.713 [531/719] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:39.713 [532/719] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:39.713 [533/719] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:39.713 [534/719] Linking static target drivers/librte_bus_vdev.a 00:08:39.971 [535/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:08:39.971 [536/719] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.230 [537/719] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:40.230 [538/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:08:40.230 [539/719] Linking target drivers/librte_bus_vdev.so.24.2 00:08:40.230 [540/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:08:40.230 [541/719] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.230 [542/719] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:08:40.230 [543/719] Linking target drivers/librte_bus_pci.so.24.2 00:08:40.488 [544/719] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:08:40.488 [545/719] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:40.488 [546/719] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:40.746 [547/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:08:40.746 [548/719] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:40.746 [549/719] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:40.746 [550/719] Linking static target drivers/librte_mempool_ring.a 00:08:41.004 [551/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:08:41.004 [552/719] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:41.004 [553/719] Linking target drivers/librte_mempool_ring.so.24.2 00:08:41.263 [554/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:08:41.560 [555/719] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:08:41.560 [556/719] Linking static target drivers/net/i40e/base/libi40e_base.a 00:08:42.127 [557/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:08:42.385 [558/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:08:42.385 [559/719] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:08:42.385 [560/719] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:08:42.644 [561/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:08:42.644 [562/719] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:08:42.644 [563/719] Linking static 
target drivers/net/i40e/libi40e_avx512_lib.a 00:08:42.903 [564/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:08:42.903 [565/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:08:43.161 [566/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:08:43.161 [567/719] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:08:43.420 [568/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:08:43.420 [569/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:08:43.676 [570/719] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:08:43.971 [571/719] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:08:43.971 [572/719] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:08:43.971 [573/719] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:08:44.559 [574/719] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:08:44.559 [575/719] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:08:44.559 [576/719] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:08:44.559 [577/719] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:08:45.124 [578/719] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:08:45.124 [579/719] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:08:45.124 [580/719] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:08:45.124 [581/719] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:08:45.395 [582/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:08:45.395 [583/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:08:45.654 [584/719] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:08:45.913 [585/719] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:08:45.913 [586/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:08:46.170 [587/719] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:08:46.170 [588/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:08:46.170 [589/719] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:08:46.170 [590/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:08:46.170 [591/719] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:08:46.428 [592/719] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:08:46.428 [593/719] Linking static target drivers/libtmp_rte_net_i40e.a 00:08:46.702 [594/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:08:46.702 [595/719] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:08:46.959 [596/719] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:46.959 [597/719] Linking static target drivers/librte_net_i40e.a 00:08:46.959 [598/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:08:46.959 [599/719] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:08:47.525 [600/719] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:08:47.525 [601/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:08:47.783 [602/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 
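The remaining targets are the applications from the "apps:" section of the configuration summary (dpdk-testpmd, dpdk-graph, dpdk-dumpcap, the dpdk-test-* tools and so on), each linked against the libraries and drivers built above. Their presence can be checked with something like the sketch below, assuming the app/ layout this log shows.

    # Hypothetical: list the application binaries produced in the build tree.
    ls /home/vagrant/spdk_repo/dpdk/build-tmp/app/dpdk-testpmd \
       /home/vagrant/spdk_repo/dpdk/build-tmp/app/dpdk-test-*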
00:08:47.783 [603/719] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:47.783 [604/719] Linking static target lib/librte_vhost.a 00:08:48.040 [605/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:08:48.040 [606/719] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:08:48.040 [607/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:08:48.040 [608/719] Linking target drivers/librte_net_i40e.so.24.2 00:08:48.296 [609/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:08:48.296 [610/719] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:08:48.296 [611/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:08:48.554 [612/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:08:48.812 [613/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:08:49.075 [614/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:08:49.075 [615/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:08:49.075 [616/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:08:49.383 [617/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:08:49.640 [618/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:08:49.640 [619/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:08:49.640 [620/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:08:49.640 [621/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:08:49.640 [622/719] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:08:49.899 [623/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:08:49.899 [624/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:08:49.899 [625/719] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:49.899 [626/719] Linking target lib/librte_vhost.so.24.2 00:08:50.157 [627/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:08:50.157 [628/719] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:08:50.414 [629/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:08:50.672 [630/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:08:50.929 [631/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:08:51.188 [632/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:08:51.447 [633/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:08:51.447 [634/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:08:51.705 [635/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:08:51.705 [636/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:08:51.705 [637/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:08:51.705 [638/719] Compiling C 
object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:08:51.705 [639/719] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:08:51.705 [640/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:08:51.973 [641/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:08:51.973 [642/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:08:52.251 [643/719] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:08:52.251 [644/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:08:52.251 [645/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:08:52.251 [646/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:08:52.251 [647/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:08:52.251 [648/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:08:52.817 [649/719] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:08:52.817 [650/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:08:52.817 [651/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:08:52.817 [652/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:08:52.817 [653/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:08:53.076 [654/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:08:53.334 [655/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:08:53.334 [656/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:08:53.334 [657/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:08:53.592 [658/719] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:08:53.592 [659/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:08:53.592 [660/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:08:53.592 [661/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:08:53.592 [662/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:08:53.850 [663/719] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:08:53.850 [664/719] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:08:54.108 [665/719] Linking static target lib/librte_pipeline.a 00:08:54.108 [666/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:08:54.108 [667/719] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:08:54.366 [668/719] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:08:54.366 [669/719] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:08:54.366 [670/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:08:54.623 [671/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:08:54.623 [672/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:08:54.623 [673/719] Linking target app/dpdk-graph 00:08:54.881 [674/719] Linking target app/dpdk-dumpcap 00:08:54.881 [675/719] Linking target app/dpdk-test-acl 00:08:55.139 [676/719] Linking target app/dpdk-proc-info 00:08:55.139 [677/719] Linking target app/dpdk-test-bbdev 00:08:55.139 [678/719] Linking target app/dpdk-pdump 00:08:55.398 [679/719] Linking 
target app/dpdk-test-compress-perf 00:08:55.398 [680/719] Linking target app/dpdk-test-crypto-perf 00:08:55.398 [681/719] Linking target app/dpdk-test-cmdline 00:08:55.398 [682/719] Linking target app/dpdk-test-dma-perf 00:08:55.965 [683/719] Linking target app/dpdk-test-fib 00:08:55.965 [684/719] Linking target app/dpdk-test-gpudev 00:08:55.965 [685/719] Linking target app/dpdk-test-flow-perf 00:08:55.965 [686/719] Linking target app/dpdk-test-mldev 00:08:55.965 [687/719] Linking target app/dpdk-test-eventdev 00:08:56.223 [688/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:08:56.223 [689/719] Linking target app/dpdk-test-pipeline 00:08:56.481 [690/719] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:08:56.739 [691/719] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:08:57.000 [692/719] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:08:57.260 [693/719] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:08:57.260 [694/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:08:57.260 [695/719] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:08:57.517 [696/719] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:08:57.775 [697/719] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:08:57.775 [698/719] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:08:58.033 [699/719] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:58.033 [700/719] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:08:58.033 [701/719] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:08:58.033 [702/719] Linking target lib/librte_pipeline.so.24.2 00:08:58.291 [703/719] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:08:58.548 [704/719] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:08:58.806 [705/719] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:08:58.806 [706/719] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:08:58.806 [707/719] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:08:58.806 [708/719] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:08:59.063 [709/719] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:08:59.063 [710/719] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:08:59.374 [711/719] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:08:59.374 [712/719] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:08:59.374 [713/719] Linking target app/dpdk-test-sad 00:08:59.646 [714/719] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:08:59.646 [715/719] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:08:59.646 [716/719] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:08:59.905 [717/719] Linking target app/dpdk-test-regex 00:09:00.162 [718/719] Linking target app/dpdk-test-security-perf 00:09:00.162 [719/719] Linking target app/dpdk-testpmd 00:09:00.162 13:27:13 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:09:00.419 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:09:00.419 [0/1] Installing files. 
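Note: the install step invoked above follows the standard out-of-tree DPDK meson/ninja workflow. A minimal sketch of that flow is shown here for reference; the build directory name and install prefix are illustrative assumptions, not values taken from this job:

    meson setup build-tmp --prefix="$PWD/build"   # configure an out-of-tree build directory with an install prefix
    ninja -C build-tmp -j"$(nproc)"               # compile the libraries, drivers and example apps linked above
    ninja -C build-tmp install                    # install headers, libraries and the example sources listed below

The "Installing" lines that follow are the output of that final install step copying the bundled example programs into the install prefix.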
00:09:00.680 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.680 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.681 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:00.682 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.682 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.683 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 
00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.684 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.685 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:00.685 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:00.685 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_mbuf.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.685 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing 
lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 
Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:00.686 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:01.256 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:01.256 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:01.256 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.256 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:01.256 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.256 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing 
app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.257 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 
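The core headers installed just above (rte_eal.h, rte_ring.h, rte_mempool.h, rte_mbuf.h, ...) are what downstream consumers compile against. A minimal sketch of checking the installed prefix with pkg-config and compiling a trivial consumer; hello_eal.c is a hypothetical one-file program (e.g. just rte_eal_init() and rte_eal_cleanup()), and the paths are assumed from the prefix used throughout this log:

  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk    # reports the DPDK version installed to this prefix
  # hello_eal.c: hypothetical consumer of the headers installed above
  cc hello_eal.c -o hello_eal $(pkg-config --cflags --libs libdpdk)
  # at run time, LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib would typically be
  # needed so the shared libraries installed by this step can be found
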
00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.258 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.259 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.260 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:01.261 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:01.261 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:09:01.261 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:09:01.261 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:09:01.261 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:09:01.261 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:09:01.261 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:09:01.261 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:09:01.261 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:09:01.261 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:09:01.261 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:09:01.261 Installing symlink pointing to librte_ring.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:09:01.261 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:09:01.261 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:09:01.261 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:09:01.261 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:09:01.261 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:09:01.261 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:09:01.261 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:09:01.261 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:09:01.261 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:09:01.261 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:09:01.261 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:09:01.261 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:09:01.261 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:09:01.261 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:09:01.261 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:09:01.261 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:09:01.261 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:09:01.261 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:09:01.261 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:09:01.261 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:09:01.261 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:09:01.261 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:09:01.261 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:09:01.261 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:09:01.261 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:09:01.261 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:09:01.261 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:09:01.261 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:09:01.261 Installing symlink 
pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:09:01.261 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:09:01.261 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:09:01.261 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:09:01.261 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:09:01.261 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:09:01.261 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:09:01.262 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:09:01.262 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:09:01.262 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:09:01.262 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:09:01.262 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:09:01.262 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:09:01.262 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:09:01.262 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:09:01.262 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:09:01.262 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:09:01.262 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:09:01.262 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:09:01.262 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:09:01.262 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:09:01.262 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:09:01.262 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:09:01.262 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:09:01.262 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:09:01.262 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:09:01.262 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:09:01.262 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:09:01.262 
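Each DPDK library is shipped as a fully versioned librte_X.so.24.2 file plus two symlinks, .so.24 and .so, which is exactly what the "Installing symlink pointing to ..." entries above and below record. A minimal sketch of inspecting one such chain by hand, using the install prefix from this log:

  $ cd /home/vagrant/spdk_repo/dpdk/build/lib
  $ ls -l librte_eal.so*          # expect the chain .so -> .so.24 -> .so.24.2
  $ readlink -f librte_eal.so     # resolves through the chain to the real librte_eal.so.24.2

The unversioned .so link is what the linker uses at build time; the versioned .so.24 name is what binaries reference at run time.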
Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:09:01.262 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:09:01.262 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:09:01.262 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:09:01.262 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:09:01.262 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:09:01.262 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:09:01.262 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:09:01.262 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:09:01.262 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:09:01.262 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:09:01.262 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:09:01.262 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:09:01.262 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:09:01.262 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:09:01.262 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:09:01.262 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:09:01.262 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:09:01.262 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:09:01.262 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:09:01.262 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:09:01.262 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:09:01.262 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:09:01.262 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:09:01.262 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:09:01.262 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:09:01.262 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:09:01.262 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:09:01.262 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:09:01.262 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:09:01.262 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:09:01.262 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:09:01.262 Installing symlink pointing to librte_reorder.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:09:01.262 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:09:01.262 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:09:01.262 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:09:01.262 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:09:01.262 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:09:01.262 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:09:01.262 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:09:01.262 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:09:01.262 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:09:01.262 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:09:01.262 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:09:01.262 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:09:01.262 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:09:01.262 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:09:01.262 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:09:01.262 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:09:01.262 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:09:01.262 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:09:01.262 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:09:01.262 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:09:01.262 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:09:01.262 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:09:01.262 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:09:01.262 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:09:01.262 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:09:01.263 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:09:01.263 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:09:01.263 Installing symlink pointing to librte_bus_pci.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:09:01.263 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:09:01.263 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:09:01.263 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:09:01.263 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:09:01.263 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:09:01.263 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:09:01.263 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:09:01.263 13:27:14 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:09:01.263 13:27:14 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:09:01.263 13:27:14 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:09:01.263 13:27:14 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:01.263 00:09:01.263 real 1m2.881s 00:09:01.263 user 7m17.413s 00:09:01.263 sys 1m28.587s 00:09:01.263 13:27:14 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:09:01.263 ************************************ 00:09:01.263 END TEST build_native_dpdk 00:09:01.263 ************************************ 00:09:01.263 13:27:14 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:09:01.521 13:27:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:01.521 13:27:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:01.521 13:27:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:01.521 13:27:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:01.522 13:27:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:01.522 13:27:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:01.522 13:27:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:01.522 13:27:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:09:01.522 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:09:01.780 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:09:01.780 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:09:01.780 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:02.038 Using 'verbs' RDMA provider 00:09:18.288 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:33.164 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:33.164 Creating mk/config.mk...done. 00:09:33.164 Creating mk/cc.flags.mk...done. 00:09:33.164 Type 'make' to build. 
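For reference, the configure line above is what ties the two builds together: SPDK is pointed at the DPDK staging prefix with --with-dpdk and then picks up the compile/link flags through the libdpdk.pc installed earlier (hence the "Using .../build/lib/pkgconfig for additional libs" message). A trimmed-down sketch of the same pattern, keeping only the flags from this log that matter for the DPDK linkage; the remaining feature flags (--with-uring, --with-ublk, and so on) are job-specific:

  $ cd /home/vagrant/spdk_repo/spdk
  $ ./configure --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared --enable-debug --enable-werror
  $ make -j10     # the same parallelism the test harness requests below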
00:09:33.164 13:27:44 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:33.164 13:27:44 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:09:33.164 13:27:44 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:09:33.164 13:27:44 -- common/autotest_common.sh@10 -- $ set +x 00:09:33.164 ************************************ 00:09:33.164 START TEST make 00:09:33.164 ************************************ 00:09:33.164 13:27:44 make -- common/autotest_common.sh@1121 -- $ make -j10 00:09:33.164 make[1]: Nothing to be done for 'all'. 00:09:55.086 CC lib/ut/ut.o 00:09:55.086 CC lib/ut_mock/mock.o 00:09:55.086 CC lib/log/log_flags.o 00:09:55.086 CC lib/log/log_deprecated.o 00:09:55.086 CC lib/log/log.o 00:09:55.086 LIB libspdk_ut.a 00:09:55.086 LIB libspdk_ut_mock.a 00:09:55.086 SO libspdk_ut.so.2.0 00:09:55.086 SO libspdk_ut_mock.so.6.0 00:09:55.086 LIB libspdk_log.a 00:09:55.086 SO libspdk_log.so.7.0 00:09:55.086 SYMLINK libspdk_ut.so 00:09:55.086 SYMLINK libspdk_ut_mock.so 00:09:55.086 SYMLINK libspdk_log.so 00:09:55.086 CC lib/dma/dma.o 00:09:55.086 CC lib/ioat/ioat.o 00:09:55.086 CXX lib/trace_parser/trace.o 00:09:55.086 CC lib/util/base64.o 00:09:55.086 CC lib/util/bit_array.o 00:09:55.086 CC lib/util/crc16.o 00:09:55.086 CC lib/util/cpuset.o 00:09:55.086 CC lib/util/crc32.o 00:09:55.086 CC lib/util/crc32c.o 00:09:55.086 CC lib/vfio_user/host/vfio_user_pci.o 00:09:55.086 LIB libspdk_dma.a 00:09:55.086 CC lib/vfio_user/host/vfio_user.o 00:09:55.086 SO libspdk_dma.so.4.0 00:09:55.086 CC lib/util/crc32_ieee.o 00:09:55.086 CC lib/util/crc64.o 00:09:55.086 CC lib/util/dif.o 00:09:55.086 SYMLINK libspdk_dma.so 00:09:55.086 CC lib/util/fd.o 00:09:55.086 CC lib/util/file.o 00:09:55.086 CC lib/util/hexlify.o 00:09:55.086 LIB libspdk_ioat.a 00:09:55.086 CC lib/util/iov.o 00:09:55.086 SO libspdk_ioat.so.7.0 00:09:55.086 LIB libspdk_vfio_user.a 00:09:55.086 CC lib/util/math.o 00:09:55.086 CC lib/util/pipe.o 00:09:55.086 CC lib/util/strerror_tls.o 00:09:55.086 CC lib/util/string.o 00:09:55.086 SYMLINK libspdk_ioat.so 00:09:55.086 SO libspdk_vfio_user.so.5.0 00:09:55.086 SYMLINK libspdk_vfio_user.so 00:09:55.086 CC lib/util/uuid.o 00:09:55.086 CC lib/util/fd_group.o 00:09:55.086 CC lib/util/xor.o 00:09:55.086 CC lib/util/zipf.o 00:09:55.086 LIB libspdk_util.a 00:09:55.086 SO libspdk_util.so.9.0 00:09:55.343 LIB libspdk_trace_parser.a 00:09:55.343 SO libspdk_trace_parser.so.5.0 00:09:55.343 SYMLINK libspdk_util.so 00:09:55.343 SYMLINK libspdk_trace_parser.so 00:09:55.602 CC lib/idxd/idxd.o 00:09:55.602 CC lib/idxd/idxd_user.o 00:09:55.602 CC lib/vmd/led.o 00:09:55.602 CC lib/vmd/vmd.o 00:09:55.602 CC lib/rdma/common.o 00:09:55.602 CC lib/json/json_parse.o 00:09:55.602 CC lib/rdma/rdma_verbs.o 00:09:55.602 CC lib/json/json_util.o 00:09:55.602 CC lib/env_dpdk/env.o 00:09:55.602 CC lib/conf/conf.o 00:09:55.863 CC lib/json/json_write.o 00:09:55.863 CC lib/env_dpdk/memory.o 00:09:55.863 CC lib/env_dpdk/pci.o 00:09:55.863 CC lib/env_dpdk/init.o 00:09:55.863 LIB libspdk_rdma.a 00:09:56.125 SO libspdk_rdma.so.6.0 00:09:56.125 CC lib/env_dpdk/threads.o 00:09:56.125 SYMLINK libspdk_rdma.so 00:09:56.125 CC lib/env_dpdk/pci_ioat.o 00:09:56.125 LIB libspdk_conf.a 00:09:56.125 SO libspdk_conf.so.6.0 00:09:56.125 LIB libspdk_json.a 00:09:56.125 CC lib/env_dpdk/pci_virtio.o 00:09:56.125 SO libspdk_json.so.6.0 00:09:56.125 SYMLINK libspdk_conf.so 00:09:56.387 CC lib/env_dpdk/pci_vmd.o 00:09:56.387 CC lib/env_dpdk/pci_idxd.o 00:09:56.387 LIB libspdk_vmd.a 00:09:56.387 SYMLINK libspdk_json.so 00:09:56.387 CC 
lib/env_dpdk/pci_event.o 00:09:56.387 LIB libspdk_idxd.a 00:09:56.387 CC lib/env_dpdk/sigbus_handler.o 00:09:56.387 CC lib/env_dpdk/pci_dpdk.o 00:09:56.387 SO libspdk_vmd.so.6.0 00:09:56.387 SO libspdk_idxd.so.12.0 00:09:56.387 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:56.387 SYMLINK libspdk_idxd.so 00:09:56.387 SYMLINK libspdk_vmd.so 00:09:56.387 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:56.650 CC lib/jsonrpc/jsonrpc_server.o 00:09:56.650 CC lib/jsonrpc/jsonrpc_client.o 00:09:56.650 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:56.650 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:56.917 LIB libspdk_jsonrpc.a 00:09:56.917 SO libspdk_jsonrpc.so.6.0 00:09:57.186 SYMLINK libspdk_jsonrpc.so 00:09:57.186 LIB libspdk_env_dpdk.a 00:09:57.458 SO libspdk_env_dpdk.so.14.0 00:09:57.458 CC lib/rpc/rpc.o 00:09:57.458 SYMLINK libspdk_env_dpdk.so 00:09:57.458 LIB libspdk_rpc.a 00:09:57.733 SO libspdk_rpc.so.6.0 00:09:57.733 SYMLINK libspdk_rpc.so 00:09:57.994 CC lib/trace/trace.o 00:09:57.995 CC lib/trace/trace_flags.o 00:09:57.995 CC lib/trace/trace_rpc.o 00:09:57.995 CC lib/keyring/keyring.o 00:09:57.995 CC lib/notify/notify.o 00:09:57.995 CC lib/notify/notify_rpc.o 00:09:57.995 CC lib/keyring/keyring_rpc.o 00:09:58.253 LIB libspdk_notify.a 00:09:58.253 LIB libspdk_keyring.a 00:09:58.253 SO libspdk_notify.so.6.0 00:09:58.253 LIB libspdk_trace.a 00:09:58.253 SO libspdk_keyring.so.1.0 00:09:58.253 SO libspdk_trace.so.10.0 00:09:58.253 SYMLINK libspdk_notify.so 00:09:58.253 SYMLINK libspdk_keyring.so 00:09:58.253 SYMLINK libspdk_trace.so 00:09:58.510 CC lib/thread/thread.o 00:09:58.510 CC lib/thread/iobuf.o 00:09:58.510 CC lib/sock/sock_rpc.o 00:09:58.510 CC lib/sock/sock.o 00:09:59.131 LIB libspdk_sock.a 00:09:59.131 SO libspdk_sock.so.9.0 00:09:59.131 SYMLINK libspdk_sock.so 00:09:59.695 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:59.695 CC lib/nvme/nvme_ctrlr.o 00:09:59.695 CC lib/nvme/nvme_fabric.o 00:09:59.695 CC lib/nvme/nvme_ns_cmd.o 00:09:59.695 CC lib/nvme/nvme_ns.o 00:09:59.695 CC lib/nvme/nvme_pcie_common.o 00:09:59.695 CC lib/nvme/nvme_pcie.o 00:09:59.695 CC lib/nvme/nvme_qpair.o 00:09:59.695 CC lib/nvme/nvme.o 00:09:59.953 LIB libspdk_thread.a 00:09:59.953 SO libspdk_thread.so.10.0 00:10:00.210 SYMLINK libspdk_thread.so 00:10:00.211 CC lib/nvme/nvme_quirks.o 00:10:00.211 CC lib/nvme/nvme_transport.o 00:10:00.472 CC lib/nvme/nvme_discovery.o 00:10:00.472 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:00.472 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:00.472 CC lib/nvme/nvme_tcp.o 00:10:00.472 CC lib/nvme/nvme_opal.o 00:10:00.472 CC lib/nvme/nvme_io_msg.o 00:10:00.731 CC lib/nvme/nvme_poll_group.o 00:10:00.989 CC lib/accel/accel.o 00:10:01.247 CC lib/init/json_config.o 00:10:01.247 CC lib/init/subsystem.o 00:10:01.247 CC lib/blob/blobstore.o 00:10:01.247 CC lib/virtio/virtio.o 00:10:01.247 CC lib/virtio/virtio_vhost_user.o 00:10:01.247 CC lib/init/subsystem_rpc.o 00:10:01.505 CC lib/init/rpc.o 00:10:01.505 CC lib/nvme/nvme_zns.o 00:10:01.505 CC lib/nvme/nvme_stubs.o 00:10:01.505 CC lib/nvme/nvme_auth.o 00:10:01.505 CC lib/blob/request.o 00:10:01.505 LIB libspdk_init.a 00:10:01.505 CC lib/virtio/virtio_vfio_user.o 00:10:01.505 SO libspdk_init.so.5.0 00:10:01.762 SYMLINK libspdk_init.so 00:10:01.762 CC lib/accel/accel_rpc.o 00:10:01.762 CC lib/virtio/virtio_pci.o 00:10:02.020 CC lib/accel/accel_sw.o 00:10:02.020 CC lib/blob/zeroes.o 00:10:02.020 CC lib/nvme/nvme_cuse.o 00:10:02.020 CC lib/blob/blob_bs_dev.o 00:10:02.020 CC lib/nvme/nvme_rdma.o 00:10:02.020 CC lib/event/app.o 00:10:02.020 LIB libspdk_virtio.a 00:10:02.280 SO 
libspdk_virtio.so.7.0 00:10:02.280 CC lib/event/reactor.o 00:10:02.280 CC lib/event/log_rpc.o 00:10:02.280 LIB libspdk_accel.a 00:10:02.280 CC lib/event/app_rpc.o 00:10:02.280 SO libspdk_accel.so.15.0 00:10:02.280 SYMLINK libspdk_virtio.so 00:10:02.280 CC lib/event/scheduler_static.o 00:10:02.280 SYMLINK libspdk_accel.so 00:10:02.540 LIB libspdk_event.a 00:10:02.540 SO libspdk_event.so.13.0 00:10:02.540 CC lib/bdev/bdev.o 00:10:02.540 CC lib/bdev/bdev_rpc.o 00:10:02.801 CC lib/bdev/part.o 00:10:02.801 CC lib/bdev/bdev_zone.o 00:10:02.801 CC lib/bdev/scsi_nvme.o 00:10:02.801 SYMLINK libspdk_event.so 00:10:03.375 LIB libspdk_nvme.a 00:10:03.632 SO libspdk_nvme.so.13.0 00:10:03.890 SYMLINK libspdk_nvme.so 00:10:04.148 LIB libspdk_blob.a 00:10:04.148 SO libspdk_blob.so.11.0 00:10:04.405 SYMLINK libspdk_blob.so 00:10:04.664 CC lib/lvol/lvol.o 00:10:04.664 CC lib/blobfs/blobfs.o 00:10:04.664 CC lib/blobfs/tree.o 00:10:05.284 LIB libspdk_bdev.a 00:10:05.542 SO libspdk_bdev.so.15.0 00:10:05.542 LIB libspdk_blobfs.a 00:10:05.542 LIB libspdk_lvol.a 00:10:05.542 SO libspdk_blobfs.so.10.0 00:10:05.542 SYMLINK libspdk_bdev.so 00:10:05.542 SO libspdk_lvol.so.10.0 00:10:05.542 SYMLINK libspdk_blobfs.so 00:10:05.542 SYMLINK libspdk_lvol.so 00:10:05.801 CC lib/scsi/lun.o 00:10:05.801 CC lib/scsi/dev.o 00:10:05.801 CC lib/scsi/scsi_bdev.o 00:10:05.801 CC lib/scsi/port.o 00:10:05.801 CC lib/scsi/scsi.o 00:10:05.801 CC lib/ublk/ublk.o 00:10:05.801 CC lib/scsi/scsi_pr.o 00:10:05.801 CC lib/nvmf/ctrlr.o 00:10:05.801 CC lib/nbd/nbd.o 00:10:05.801 CC lib/ftl/ftl_core.o 00:10:06.059 CC lib/ftl/ftl_init.o 00:10:06.059 CC lib/ftl/ftl_layout.o 00:10:06.059 CC lib/nbd/nbd_rpc.o 00:10:06.317 CC lib/ftl/ftl_debug.o 00:10:06.317 CC lib/ftl/ftl_io.o 00:10:06.317 CC lib/nvmf/ctrlr_discovery.o 00:10:06.317 CC lib/nvmf/ctrlr_bdev.o 00:10:06.317 CC lib/ftl/ftl_sb.o 00:10:06.317 LIB libspdk_nbd.a 00:10:06.317 SO libspdk_nbd.so.7.0 00:10:06.317 CC lib/scsi/scsi_rpc.o 00:10:06.317 CC lib/scsi/task.o 00:10:06.317 SYMLINK libspdk_nbd.so 00:10:06.317 CC lib/ublk/ublk_rpc.o 00:10:06.576 CC lib/ftl/ftl_l2p.o 00:10:06.576 CC lib/ftl/ftl_l2p_flat.o 00:10:06.576 CC lib/ftl/ftl_nv_cache.o 00:10:06.576 CC lib/ftl/ftl_band.o 00:10:06.576 CC lib/ftl/ftl_band_ops.o 00:10:06.576 LIB libspdk_ublk.a 00:10:06.576 LIB libspdk_scsi.a 00:10:06.576 SO libspdk_ublk.so.3.0 00:10:06.576 SO libspdk_scsi.so.9.0 00:10:06.833 CC lib/ftl/ftl_writer.o 00:10:06.833 CC lib/ftl/ftl_rq.o 00:10:06.833 SYMLINK libspdk_ublk.so 00:10:06.833 CC lib/nvmf/subsystem.o 00:10:06.833 CC lib/ftl/ftl_reloc.o 00:10:06.833 SYMLINK libspdk_scsi.so 00:10:06.833 CC lib/nvmf/nvmf.o 00:10:06.833 CC lib/nvmf/nvmf_rpc.o 00:10:06.833 CC lib/ftl/ftl_l2p_cache.o 00:10:07.091 CC lib/nvmf/transport.o 00:10:07.091 CC lib/nvmf/tcp.o 00:10:07.091 CC lib/nvmf/stubs.o 00:10:07.091 CC lib/ftl/ftl_p2l.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:07.658 CC lib/nvmf/mdns_server.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:07.658 CC lib/nvmf/rdma.o 00:10:07.658 CC lib/nvmf/auth.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:07.658 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:07.925 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:07.925 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:07.925 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:07.925 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:07.925 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:08.183 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:08.183 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
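The SO/SYMLINK lines in this stretch record the per-library versioning of the SPDK shared objects (libspdk_nvme.so.13.0, libspdk_blob.so.11.0, libspdk_bdev.so.15.0, ...). A hedged sketch of double-checking one of them after the build, assuming SPDK's usual build/lib output directory (the path is not printed explicitly in this log):

  $ cd /home/vagrant/spdk_repo/spdk/build/lib
  $ readelf -d libspdk_nvme.so | grep SONAME    # print the soname recorded in the shared object
  $ readelf -d libspdk_nvme.so | grep NEEDED    # list the libraries it was linked against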
00:10:08.183 CC lib/vhost/vhost.o 00:10:08.183 CC lib/iscsi/conn.o 00:10:08.183 CC lib/vhost/vhost_rpc.o 00:10:08.183 CC lib/vhost/vhost_scsi.o 00:10:08.183 CC lib/ftl/utils/ftl_conf.o 00:10:08.442 CC lib/vhost/vhost_blk.o 00:10:08.442 CC lib/vhost/rte_vhost_user.o 00:10:08.442 CC lib/iscsi/init_grp.o 00:10:08.442 CC lib/ftl/utils/ftl_md.o 00:10:08.700 CC lib/ftl/utils/ftl_mempool.o 00:10:08.700 CC lib/iscsi/iscsi.o 00:10:08.700 CC lib/iscsi/md5.o 00:10:08.958 CC lib/iscsi/param.o 00:10:08.958 CC lib/iscsi/portal_grp.o 00:10:08.958 CC lib/iscsi/tgt_node.o 00:10:08.958 CC lib/ftl/utils/ftl_bitmap.o 00:10:08.958 CC lib/ftl/utils/ftl_property.o 00:10:09.217 CC lib/iscsi/iscsi_subsystem.o 00:10:09.217 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:09.217 CC lib/iscsi/iscsi_rpc.o 00:10:09.217 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:09.217 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:09.217 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:09.474 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:09.474 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:09.474 LIB libspdk_vhost.a 00:10:09.475 CC lib/iscsi/task.o 00:10:09.475 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:09.475 SO libspdk_vhost.so.8.0 00:10:09.475 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:09.475 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:09.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:09.732 CC lib/ftl/base/ftl_base_dev.o 00:10:09.732 CC lib/ftl/base/ftl_base_bdev.o 00:10:09.732 SYMLINK libspdk_vhost.so 00:10:09.732 LIB libspdk_nvmf.a 00:10:09.732 CC lib/ftl/ftl_trace.o 00:10:09.732 SO libspdk_nvmf.so.18.0 00:10:09.990 LIB libspdk_ftl.a 00:10:09.990 SYMLINK libspdk_nvmf.so 00:10:09.990 LIB libspdk_iscsi.a 00:10:10.250 SO libspdk_ftl.so.9.0 00:10:10.250 SO libspdk_iscsi.so.8.0 00:10:10.250 SYMLINK libspdk_iscsi.so 00:10:10.516 SYMLINK libspdk_ftl.so 00:10:10.774 CC module/env_dpdk/env_dpdk_rpc.o 00:10:11.032 CC module/accel/error/accel_error.o 00:10:11.032 CC module/sock/uring/uring.o 00:10:11.032 CC module/keyring/file/keyring.o 00:10:11.032 CC module/blob/bdev/blob_bdev.o 00:10:11.032 CC module/accel/iaa/accel_iaa.o 00:10:11.032 CC module/sock/posix/posix.o 00:10:11.032 CC module/accel/ioat/accel_ioat.o 00:10:11.032 CC module/accel/dsa/accel_dsa.o 00:10:11.032 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:11.032 LIB libspdk_env_dpdk_rpc.a 00:10:11.032 SO libspdk_env_dpdk_rpc.so.6.0 00:10:11.032 CC module/keyring/file/keyring_rpc.o 00:10:11.032 CC module/accel/error/accel_error_rpc.o 00:10:11.032 SYMLINK libspdk_env_dpdk_rpc.so 00:10:11.289 CC module/accel/ioat/accel_ioat_rpc.o 00:10:11.289 CC module/accel/dsa/accel_dsa_rpc.o 00:10:11.289 CC module/accel/iaa/accel_iaa_rpc.o 00:10:11.289 LIB libspdk_blob_bdev.a 00:10:11.289 LIB libspdk_scheduler_dynamic.a 00:10:11.289 SO libspdk_blob_bdev.so.11.0 00:10:11.290 SO libspdk_scheduler_dynamic.so.4.0 00:10:11.290 LIB libspdk_keyring_file.a 00:10:11.290 LIB libspdk_accel_error.a 00:10:11.290 SO libspdk_keyring_file.so.1.0 00:10:11.290 SO libspdk_accel_error.so.2.0 00:10:11.290 SYMLINK libspdk_blob_bdev.so 00:10:11.290 LIB libspdk_accel_dsa.a 00:10:11.290 LIB libspdk_accel_ioat.a 00:10:11.290 SYMLINK libspdk_scheduler_dynamic.so 00:10:11.290 LIB libspdk_accel_iaa.a 00:10:11.290 SYMLINK libspdk_accel_error.so 00:10:11.290 SYMLINK libspdk_keyring_file.so 00:10:11.290 SO libspdk_accel_ioat.so.6.0 00:10:11.290 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:11.547 SO libspdk_accel_dsa.so.5.0 00:10:11.547 SO libspdk_accel_iaa.so.3.0 00:10:11.547 SYMLINK libspdk_accel_ioat.so 00:10:11.547 SYMLINK 
libspdk_accel_iaa.so 00:10:11.547 SYMLINK libspdk_accel_dsa.so 00:10:11.547 CC module/scheduler/gscheduler/gscheduler.o 00:10:11.547 LIB libspdk_scheduler_dpdk_governor.a 00:10:11.547 LIB libspdk_sock_uring.a 00:10:11.547 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:11.547 SO libspdk_sock_uring.so.5.0 00:10:11.804 CC module/bdev/delay/vbdev_delay.o 00:10:11.804 CC module/bdev/error/vbdev_error.o 00:10:11.804 CC module/blobfs/bdev/blobfs_bdev.o 00:10:11.804 LIB libspdk_scheduler_gscheduler.a 00:10:11.804 CC module/bdev/gpt/gpt.o 00:10:11.804 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:11.804 CC module/bdev/malloc/bdev_malloc.o 00:10:11.804 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:11.804 CC module/bdev/lvol/vbdev_lvol.o 00:10:11.804 LIB libspdk_sock_posix.a 00:10:11.804 SO libspdk_scheduler_gscheduler.so.4.0 00:10:11.804 SYMLINK libspdk_sock_uring.so 00:10:11.804 SO libspdk_sock_posix.so.6.0 00:10:11.804 CC module/bdev/error/vbdev_error_rpc.o 00:10:11.804 SYMLINK libspdk_scheduler_gscheduler.so 00:10:11.804 CC module/bdev/gpt/vbdev_gpt.o 00:10:11.804 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:11.804 SYMLINK libspdk_sock_posix.so 00:10:11.804 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:11.804 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:12.061 LIB libspdk_bdev_error.a 00:10:12.061 SO libspdk_bdev_error.so.6.0 00:10:12.061 LIB libspdk_blobfs_bdev.a 00:10:12.061 LIB libspdk_bdev_delay.a 00:10:12.061 SYMLINK libspdk_bdev_error.so 00:10:12.061 LIB libspdk_bdev_malloc.a 00:10:12.061 SO libspdk_blobfs_bdev.so.6.0 00:10:12.061 LIB libspdk_bdev_gpt.a 00:10:12.061 SO libspdk_bdev_delay.so.6.0 00:10:12.061 SO libspdk_bdev_malloc.so.6.0 00:10:12.062 CC module/bdev/nvme/bdev_nvme.o 00:10:12.320 SO libspdk_bdev_gpt.so.6.0 00:10:12.320 CC module/bdev/null/bdev_null.o 00:10:12.320 SYMLINK libspdk_blobfs_bdev.so 00:10:12.320 SYMLINK libspdk_bdev_delay.so 00:10:12.320 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:12.320 CC module/bdev/nvme/nvme_rpc.o 00:10:12.320 SYMLINK libspdk_bdev_gpt.so 00:10:12.320 SYMLINK libspdk_bdev_malloc.so 00:10:12.320 CC module/bdev/nvme/bdev_mdns_client.o 00:10:12.320 CC module/bdev/nvme/vbdev_opal.o 00:10:12.320 LIB libspdk_bdev_lvol.a 00:10:12.320 CC module/bdev/passthru/vbdev_passthru.o 00:10:12.320 SO libspdk_bdev_lvol.so.6.0 00:10:12.320 CC module/bdev/raid/bdev_raid.o 00:10:12.320 SYMLINK libspdk_bdev_lvol.so 00:10:12.320 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:12.578 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:12.578 CC module/bdev/split/vbdev_split.o 00:10:12.578 CC module/bdev/null/bdev_null_rpc.o 00:10:12.578 CC module/bdev/split/vbdev_split_rpc.o 00:10:12.578 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:12.578 LIB libspdk_bdev_passthru.a 00:10:12.836 LIB libspdk_bdev_null.a 00:10:12.836 SO libspdk_bdev_passthru.so.6.0 00:10:12.836 SO libspdk_bdev_null.so.6.0 00:10:12.836 SYMLINK libspdk_bdev_passthru.so 00:10:12.836 SYMLINK libspdk_bdev_null.so 00:10:12.836 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:12.836 CC module/bdev/raid/bdev_raid_rpc.o 00:10:12.836 LIB libspdk_bdev_split.a 00:10:12.836 CC module/bdev/uring/bdev_uring.o 00:10:12.836 SO libspdk_bdev_split.so.6.0 00:10:13.103 CC module/bdev/aio/bdev_aio.o 00:10:13.103 CC module/bdev/iscsi/bdev_iscsi.o 00:10:13.103 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:13.103 CC module/bdev/ftl/bdev_ftl.o 00:10:13.103 SYMLINK libspdk_bdev_split.so 00:10:13.103 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:13.103 CC module/bdev/aio/bdev_aio_rpc.o 00:10:13.103 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:13.361 CC module/bdev/uring/bdev_uring_rpc.o 00:10:13.361 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:13.361 LIB libspdk_bdev_aio.a 00:10:13.361 CC module/bdev/raid/bdev_raid_sb.o 00:10:13.361 CC module/bdev/raid/raid0.o 00:10:13.361 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:13.361 LIB libspdk_bdev_zone_block.a 00:10:13.361 SO libspdk_bdev_aio.so.6.0 00:10:13.361 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:13.361 SO libspdk_bdev_zone_block.so.6.0 00:10:13.361 SYMLINK libspdk_bdev_aio.so 00:10:13.361 LIB libspdk_bdev_uring.a 00:10:13.619 SYMLINK libspdk_bdev_zone_block.so 00:10:13.619 CC module/bdev/raid/raid1.o 00:10:13.619 CC module/bdev/raid/concat.o 00:10:13.619 SO libspdk_bdev_uring.so.6.0 00:10:13.619 LIB libspdk_bdev_iscsi.a 00:10:13.619 LIB libspdk_bdev_virtio.a 00:10:13.619 LIB libspdk_bdev_ftl.a 00:10:13.619 SYMLINK libspdk_bdev_uring.so 00:10:13.619 SO libspdk_bdev_iscsi.so.6.0 00:10:13.619 SO libspdk_bdev_ftl.so.6.0 00:10:13.619 SO libspdk_bdev_virtio.so.6.0 00:10:13.619 SYMLINK libspdk_bdev_ftl.so 00:10:13.619 SYMLINK libspdk_bdev_iscsi.so 00:10:13.619 SYMLINK libspdk_bdev_virtio.so 00:10:13.876 LIB libspdk_bdev_raid.a 00:10:13.876 SO libspdk_bdev_raid.so.6.0 00:10:13.876 SYMLINK libspdk_bdev_raid.so 00:10:14.442 LIB libspdk_bdev_nvme.a 00:10:14.442 SO libspdk_bdev_nvme.so.7.0 00:10:14.700 SYMLINK libspdk_bdev_nvme.so 00:10:15.266 CC module/event/subsystems/keyring/keyring.o 00:10:15.266 CC module/event/subsystems/iobuf/iobuf.o 00:10:15.266 CC module/event/subsystems/sock/sock.o 00:10:15.266 CC module/event/subsystems/vmd/vmd.o 00:10:15.266 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:15.266 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:15.266 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:15.266 CC module/event/subsystems/scheduler/scheduler.o 00:10:15.524 LIB libspdk_event_sock.a 00:10:15.524 LIB libspdk_event_keyring.a 00:10:15.524 LIB libspdk_event_vhost_blk.a 00:10:15.524 SO libspdk_event_sock.so.5.0 00:10:15.524 SO libspdk_event_keyring.so.1.0 00:10:15.524 LIB libspdk_event_vmd.a 00:10:15.524 LIB libspdk_event_scheduler.a 00:10:15.524 LIB libspdk_event_iobuf.a 00:10:15.524 SO libspdk_event_vhost_blk.so.3.0 00:10:15.524 SO libspdk_event_scheduler.so.4.0 00:10:15.524 SO libspdk_event_vmd.so.6.0 00:10:15.524 SYMLINK libspdk_event_sock.so 00:10:15.524 SO libspdk_event_iobuf.so.3.0 00:10:15.524 SYMLINK libspdk_event_keyring.so 00:10:15.524 SYMLINK libspdk_event_vhost_blk.so 00:10:15.524 SYMLINK libspdk_event_scheduler.so 00:10:15.524 SYMLINK libspdk_event_iobuf.so 00:10:15.524 SYMLINK libspdk_event_vmd.so 00:10:15.783 CC module/event/subsystems/accel/accel.o 00:10:16.040 LIB libspdk_event_accel.a 00:10:16.040 SO libspdk_event_accel.so.6.0 00:10:16.338 SYMLINK libspdk_event_accel.so 00:10:16.601 CC module/event/subsystems/bdev/bdev.o 00:10:16.880 LIB libspdk_event_bdev.a 00:10:16.880 SO libspdk_event_bdev.so.6.0 00:10:16.880 SYMLINK libspdk_event_bdev.so 00:10:17.137 CC module/event/subsystems/nbd/nbd.o 00:10:17.137 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:17.137 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:17.137 CC module/event/subsystems/ublk/ublk.o 00:10:17.137 CC module/event/subsystems/scsi/scsi.o 00:10:17.396 LIB libspdk_event_nbd.a 00:10:17.396 LIB libspdk_event_ublk.a 00:10:17.396 SO libspdk_event_nbd.so.6.0 00:10:17.396 LIB libspdk_event_scsi.a 00:10:17.396 SO libspdk_event_ublk.so.3.0 00:10:17.396 LIB libspdk_event_nvmf.a 00:10:17.396 SO libspdk_event_scsi.so.6.0 00:10:17.396 SYMLINK 
libspdk_event_nbd.so 00:10:17.396 SO libspdk_event_nvmf.so.6.0 00:10:17.396 SYMLINK libspdk_event_ublk.so 00:10:17.396 SYMLINK libspdk_event_scsi.so 00:10:17.655 SYMLINK libspdk_event_nvmf.so 00:10:17.913 CC module/event/subsystems/iscsi/iscsi.o 00:10:17.913 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:17.913 LIB libspdk_event_vhost_scsi.a 00:10:17.913 LIB libspdk_event_iscsi.a 00:10:17.913 SO libspdk_event_vhost_scsi.so.3.0 00:10:18.170 SO libspdk_event_iscsi.so.6.0 00:10:18.170 SYMLINK libspdk_event_vhost_scsi.so 00:10:18.170 SYMLINK libspdk_event_iscsi.so 00:10:18.431 SO libspdk.so.6.0 00:10:18.431 SYMLINK libspdk.so 00:10:18.689 CXX app/trace/trace.o 00:10:18.689 CC examples/ioat/perf/perf.o 00:10:18.689 CC examples/nvme/hello_world/hello_world.o 00:10:18.689 CC examples/accel/perf/accel_perf.o 00:10:18.689 CC examples/blob/hello_world/hello_blob.o 00:10:18.689 CC examples/bdev/hello_world/hello_bdev.o 00:10:18.689 CC test/bdev/bdevio/bdevio.o 00:10:18.689 CC test/accel/dif/dif.o 00:10:18.689 CC test/blobfs/mkfs/mkfs.o 00:10:18.689 CC test/app/bdev_svc/bdev_svc.o 00:10:18.946 LINK bdev_svc 00:10:18.946 LINK hello_world 00:10:18.946 LINK ioat_perf 00:10:18.946 LINK mkfs 00:10:18.946 LINK hello_bdev 00:10:19.204 LINK hello_blob 00:10:19.204 LINK spdk_trace 00:10:19.204 LINK bdevio 00:10:19.204 LINK dif 00:10:19.204 LINK accel_perf 00:10:19.204 CC examples/ioat/verify/verify.o 00:10:19.462 CC examples/nvme/reconnect/reconnect.o 00:10:19.462 CC test/app/histogram_perf/histogram_perf.o 00:10:19.462 CC app/trace_record/trace_record.o 00:10:19.462 CC examples/bdev/bdevperf/bdevperf.o 00:10:19.462 CC examples/blob/cli/blobcli.o 00:10:19.462 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:19.462 LINK verify 00:10:19.462 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:19.719 CC test/app/jsoncat/jsoncat.o 00:10:19.719 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:19.719 LINK histogram_perf 00:10:19.719 LINK spdk_trace_record 00:10:19.719 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:19.719 LINK reconnect 00:10:19.719 LINK jsoncat 00:10:19.977 CC test/app/stub/stub.o 00:10:19.977 LINK blobcli 00:10:19.977 LINK nvme_fuzz 00:10:19.977 CC app/nvmf_tgt/nvmf_main.o 00:10:19.977 CC examples/sock/hello_world/hello_sock.o 00:10:19.977 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:20.234 LINK stub 00:10:20.234 LINK vhost_fuzz 00:10:20.234 CC examples/vmd/lsvmd/lsvmd.o 00:10:20.234 LINK nvmf_tgt 00:10:20.234 LINK bdevperf 00:10:20.234 LINK hello_sock 00:10:20.234 CC examples/vmd/led/led.o 00:10:20.492 CC app/iscsi_tgt/iscsi_tgt.o 00:10:20.492 LINK lsvmd 00:10:20.492 LINK led 00:10:20.492 CC app/spdk_tgt/spdk_tgt.o 00:10:20.492 CC app/spdk_lspci/spdk_lspci.o 00:10:20.492 LINK nvme_manage 00:10:20.492 CC app/spdk_nvme_perf/perf.o 00:10:20.492 LINK iscsi_tgt 00:10:20.748 CC app/spdk_nvme_identify/identify.o 00:10:20.748 CC app/spdk_nvme_discover/discovery_aer.o 00:10:20.748 LINK spdk_lspci 00:10:20.748 CC examples/nvmf/nvmf/nvmf.o 00:10:20.748 CC examples/nvme/arbitration/arbitration.o 00:10:20.748 LINK spdk_tgt 00:10:21.005 CC app/spdk_top/spdk_top.o 00:10:21.005 LINK spdk_nvme_discover 00:10:21.005 LINK nvmf 00:10:21.005 CC app/spdk_dd/spdk_dd.o 00:10:21.005 CC app/vhost/vhost.o 00:10:21.262 CC examples/nvme/hotplug/hotplug.o 00:10:21.262 LINK iscsi_fuzz 00:10:21.263 LINK arbitration 00:10:21.520 LINK vhost 00:10:21.520 CC app/fio/nvme/fio_plugin.o 00:10:21.520 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:21.520 LINK hotplug 00:10:21.520 LINK spdk_nvme_identify 00:10:21.520 CC 
examples/nvme/abort/abort.o 00:10:21.520 LINK spdk_nvme_perf 00:10:21.520 LINK spdk_dd 00:10:21.778 TEST_HEADER include/spdk/accel.h 00:10:21.778 TEST_HEADER include/spdk/accel_module.h 00:10:21.778 TEST_HEADER include/spdk/assert.h 00:10:21.778 TEST_HEADER include/spdk/barrier.h 00:10:21.778 TEST_HEADER include/spdk/base64.h 00:10:21.778 TEST_HEADER include/spdk/bdev.h 00:10:21.778 TEST_HEADER include/spdk/bdev_module.h 00:10:21.778 TEST_HEADER include/spdk/bdev_zone.h 00:10:21.778 TEST_HEADER include/spdk/bit_array.h 00:10:21.778 TEST_HEADER include/spdk/bit_pool.h 00:10:21.778 TEST_HEADER include/spdk/blob_bdev.h 00:10:21.778 LINK cmb_copy 00:10:21.778 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:21.778 TEST_HEADER include/spdk/blobfs.h 00:10:21.778 TEST_HEADER include/spdk/blob.h 00:10:21.778 TEST_HEADER include/spdk/conf.h 00:10:21.778 TEST_HEADER include/spdk/config.h 00:10:21.778 TEST_HEADER include/spdk/cpuset.h 00:10:21.778 TEST_HEADER include/spdk/crc16.h 00:10:21.778 TEST_HEADER include/spdk/crc32.h 00:10:21.778 TEST_HEADER include/spdk/crc64.h 00:10:21.778 TEST_HEADER include/spdk/dif.h 00:10:21.778 TEST_HEADER include/spdk/dma.h 00:10:21.778 TEST_HEADER include/spdk/endian.h 00:10:21.778 TEST_HEADER include/spdk/env_dpdk.h 00:10:21.778 TEST_HEADER include/spdk/env.h 00:10:21.778 TEST_HEADER include/spdk/event.h 00:10:21.778 TEST_HEADER include/spdk/fd_group.h 00:10:21.778 TEST_HEADER include/spdk/fd.h 00:10:21.778 TEST_HEADER include/spdk/file.h 00:10:21.778 TEST_HEADER include/spdk/ftl.h 00:10:21.778 TEST_HEADER include/spdk/gpt_spec.h 00:10:21.778 TEST_HEADER include/spdk/hexlify.h 00:10:21.778 TEST_HEADER include/spdk/histogram_data.h 00:10:21.778 TEST_HEADER include/spdk/idxd.h 00:10:21.778 TEST_HEADER include/spdk/idxd_spec.h 00:10:21.778 TEST_HEADER include/spdk/init.h 00:10:21.778 TEST_HEADER include/spdk/ioat.h 00:10:21.778 LINK spdk_top 00:10:21.778 TEST_HEADER include/spdk/ioat_spec.h 00:10:21.778 TEST_HEADER include/spdk/iscsi_spec.h 00:10:21.778 TEST_HEADER include/spdk/json.h 00:10:21.778 TEST_HEADER include/spdk/jsonrpc.h 00:10:21.778 TEST_HEADER include/spdk/keyring.h 00:10:21.778 TEST_HEADER include/spdk/keyring_module.h 00:10:21.778 TEST_HEADER include/spdk/likely.h 00:10:21.778 TEST_HEADER include/spdk/log.h 00:10:21.778 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:21.778 TEST_HEADER include/spdk/lvol.h 00:10:21.778 TEST_HEADER include/spdk/memory.h 00:10:21.778 TEST_HEADER include/spdk/mmio.h 00:10:21.778 TEST_HEADER include/spdk/nbd.h 00:10:21.778 TEST_HEADER include/spdk/notify.h 00:10:21.778 TEST_HEADER include/spdk/nvme.h 00:10:21.778 TEST_HEADER include/spdk/nvme_intel.h 00:10:21.778 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:22.036 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:22.036 TEST_HEADER include/spdk/nvme_spec.h 00:10:22.036 TEST_HEADER include/spdk/nvme_zns.h 00:10:22.036 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:22.036 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:22.036 TEST_HEADER include/spdk/nvmf.h 00:10:22.036 TEST_HEADER include/spdk/nvmf_spec.h 00:10:22.036 CC app/fio/bdev/fio_plugin.o 00:10:22.036 TEST_HEADER include/spdk/nvmf_transport.h 00:10:22.036 TEST_HEADER include/spdk/opal.h 00:10:22.036 TEST_HEADER include/spdk/opal_spec.h 00:10:22.036 TEST_HEADER include/spdk/pci_ids.h 00:10:22.036 TEST_HEADER include/spdk/pipe.h 00:10:22.036 TEST_HEADER include/spdk/queue.h 00:10:22.036 TEST_HEADER include/spdk/reduce.h 00:10:22.036 TEST_HEADER include/spdk/rpc.h 00:10:22.036 TEST_HEADER include/spdk/scheduler.h 00:10:22.036 
TEST_HEADER include/spdk/scsi.h 00:10:22.036 TEST_HEADER include/spdk/scsi_spec.h 00:10:22.036 TEST_HEADER include/spdk/sock.h 00:10:22.036 LINK abort 00:10:22.036 TEST_HEADER include/spdk/stdinc.h 00:10:22.036 TEST_HEADER include/spdk/string.h 00:10:22.036 TEST_HEADER include/spdk/thread.h 00:10:22.036 TEST_HEADER include/spdk/trace.h 00:10:22.036 TEST_HEADER include/spdk/trace_parser.h 00:10:22.036 TEST_HEADER include/spdk/tree.h 00:10:22.036 TEST_HEADER include/spdk/ublk.h 00:10:22.036 TEST_HEADER include/spdk/util.h 00:10:22.036 TEST_HEADER include/spdk/uuid.h 00:10:22.036 TEST_HEADER include/spdk/version.h 00:10:22.036 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:22.036 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:22.036 TEST_HEADER include/spdk/vhost.h 00:10:22.036 TEST_HEADER include/spdk/vmd.h 00:10:22.036 TEST_HEADER include/spdk/xor.h 00:10:22.036 TEST_HEADER include/spdk/zipf.h 00:10:22.036 CXX test/cpp_headers/accel.o 00:10:22.036 CC test/dma/test_dma/test_dma.o 00:10:22.036 CC test/env/vtophys/vtophys.o 00:10:22.036 LINK pmr_persistence 00:10:22.036 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:22.293 CXX test/cpp_headers/accel_module.o 00:10:22.293 CXX test/cpp_headers/assert.o 00:10:22.293 CC test/env/mem_callbacks/mem_callbacks.o 00:10:22.293 LINK vtophys 00:10:22.293 LINK spdk_nvme 00:10:22.293 LINK env_dpdk_post_init 00:10:22.293 CXX test/cpp_headers/barrier.o 00:10:22.551 LINK spdk_bdev 00:10:22.551 CXX test/cpp_headers/base64.o 00:10:22.551 CC test/event/event_perf/event_perf.o 00:10:22.551 CC examples/util/zipf/zipf.o 00:10:22.551 LINK test_dma 00:10:22.551 CC test/event/reactor/reactor.o 00:10:22.551 CC examples/thread/thread/thread_ex.o 00:10:22.551 CC examples/idxd/perf/perf.o 00:10:22.551 CXX test/cpp_headers/bdev.o 00:10:22.551 LINK event_perf 00:10:22.808 LINK zipf 00:10:22.808 CC test/event/reactor_perf/reactor_perf.o 00:10:22.808 CC test/event/app_repeat/app_repeat.o 00:10:22.808 LINK reactor 00:10:22.808 CXX test/cpp_headers/bdev_module.o 00:10:22.808 LINK mem_callbacks 00:10:22.808 CXX test/cpp_headers/bdev_zone.o 00:10:22.808 LINK reactor_perf 00:10:22.808 LINK app_repeat 00:10:23.065 LINK thread 00:10:23.065 CC test/env/memory/memory_ut.o 00:10:23.065 LINK idxd_perf 00:10:23.065 CXX test/cpp_headers/bit_array.o 00:10:23.065 CXX test/cpp_headers/bit_pool.o 00:10:23.065 CC test/event/scheduler/scheduler.o 00:10:23.065 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:23.065 CC test/env/pci/pci_ut.o 00:10:23.065 CXX test/cpp_headers/blob_bdev.o 00:10:23.065 CXX test/cpp_headers/blobfs_bdev.o 00:10:23.065 CXX test/cpp_headers/blobfs.o 00:10:23.323 CXX test/cpp_headers/blob.o 00:10:23.323 LINK interrupt_tgt 00:10:23.323 CXX test/cpp_headers/conf.o 00:10:23.323 LINK scheduler 00:10:23.582 CC test/lvol/esnap/esnap.o 00:10:23.582 CC test/nvme/reset/reset.o 00:10:23.582 CC test/nvme/aer/aer.o 00:10:23.582 CC test/nvme/sgl/sgl.o 00:10:23.582 CXX test/cpp_headers/config.o 00:10:23.582 LINK pci_ut 00:10:23.582 CXX test/cpp_headers/cpuset.o 00:10:23.582 CC test/nvme/e2edp/nvme_dp.o 00:10:23.582 CC test/nvme/overhead/overhead.o 00:10:23.840 CC test/rpc_client/rpc_client_test.o 00:10:23.840 CXX test/cpp_headers/crc16.o 00:10:23.840 LINK aer 00:10:23.840 LINK nvme_dp 00:10:23.840 LINK memory_ut 00:10:23.840 LINK sgl 00:10:24.099 LINK rpc_client_test 00:10:24.099 LINK reset 00:10:24.099 CXX test/cpp_headers/crc32.o 00:10:24.099 LINK overhead 00:10:24.099 CXX test/cpp_headers/crc64.o 00:10:24.099 CXX test/cpp_headers/dif.o 00:10:24.099 CXX 
test/cpp_headers/dma.o 00:10:24.099 CXX test/cpp_headers/endian.o 00:10:24.099 CXX test/cpp_headers/env_dpdk.o 00:10:24.099 CXX test/cpp_headers/env.o 00:10:24.356 CC test/nvme/err_injection/err_injection.o 00:10:24.356 CXX test/cpp_headers/event.o 00:10:24.356 CXX test/cpp_headers/fd_group.o 00:10:24.356 CXX test/cpp_headers/fd.o 00:10:24.356 CXX test/cpp_headers/file.o 00:10:24.356 CXX test/cpp_headers/ftl.o 00:10:24.356 CC test/thread/poller_perf/poller_perf.o 00:10:24.614 CC test/nvme/reserve/reserve.o 00:10:24.614 CC test/nvme/startup/startup.o 00:10:24.614 CXX test/cpp_headers/gpt_spec.o 00:10:24.614 LINK err_injection 00:10:24.614 CC test/nvme/simple_copy/simple_copy.o 00:10:24.614 LINK poller_perf 00:10:24.614 CC test/nvme/connect_stress/connect_stress.o 00:10:24.614 CXX test/cpp_headers/hexlify.o 00:10:24.872 CC test/nvme/boot_partition/boot_partition.o 00:10:24.872 CXX test/cpp_headers/histogram_data.o 00:10:24.872 CC test/nvme/compliance/nvme_compliance.o 00:10:24.872 LINK reserve 00:10:24.872 LINK startup 00:10:24.872 CXX test/cpp_headers/idxd.o 00:10:24.872 LINK connect_stress 00:10:25.131 LINK simple_copy 00:10:25.131 LINK boot_partition 00:10:25.131 CC test/nvme/fused_ordering/fused_ordering.o 00:10:25.131 CC test/nvme/fdp/fdp.o 00:10:25.131 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:25.131 CXX test/cpp_headers/idxd_spec.o 00:10:25.131 CXX test/cpp_headers/init.o 00:10:25.131 CXX test/cpp_headers/ioat.o 00:10:25.388 CXX test/cpp_headers/ioat_spec.o 00:10:25.388 LINK nvme_compliance 00:10:25.388 CC test/nvme/cuse/cuse.o 00:10:25.388 LINK fused_ordering 00:10:25.388 CXX test/cpp_headers/iscsi_spec.o 00:10:25.388 CXX test/cpp_headers/json.o 00:10:25.388 LINK doorbell_aers 00:10:25.388 CXX test/cpp_headers/jsonrpc.o 00:10:25.388 CXX test/cpp_headers/keyring.o 00:10:25.646 CXX test/cpp_headers/keyring_module.o 00:10:25.646 CXX test/cpp_headers/likely.o 00:10:25.646 LINK fdp 00:10:25.646 CXX test/cpp_headers/log.o 00:10:25.646 CXX test/cpp_headers/lvol.o 00:10:25.646 CXX test/cpp_headers/memory.o 00:10:25.646 CXX test/cpp_headers/mmio.o 00:10:25.646 CXX test/cpp_headers/nbd.o 00:10:25.646 CXX test/cpp_headers/notify.o 00:10:25.904 CXX test/cpp_headers/nvme.o 00:10:25.904 CXX test/cpp_headers/nvme_intel.o 00:10:25.904 CXX test/cpp_headers/nvme_ocssd.o 00:10:25.904 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:25.904 CXX test/cpp_headers/nvme_spec.o 00:10:25.904 CXX test/cpp_headers/nvme_zns.o 00:10:25.904 CXX test/cpp_headers/nvmf_cmd.o 00:10:25.904 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:25.904 CXX test/cpp_headers/nvmf.o 00:10:26.162 CXX test/cpp_headers/nvmf_spec.o 00:10:26.162 CXX test/cpp_headers/nvmf_transport.o 00:10:26.162 CXX test/cpp_headers/opal.o 00:10:26.162 CXX test/cpp_headers/opal_spec.o 00:10:26.162 CXX test/cpp_headers/pci_ids.o 00:10:26.162 CXX test/cpp_headers/pipe.o 00:10:26.162 CXX test/cpp_headers/queue.o 00:10:26.162 CXX test/cpp_headers/reduce.o 00:10:26.162 CXX test/cpp_headers/rpc.o 00:10:26.162 CXX test/cpp_headers/scheduler.o 00:10:26.162 CXX test/cpp_headers/scsi.o 00:10:26.162 CXX test/cpp_headers/scsi_spec.o 00:10:26.162 CXX test/cpp_headers/sock.o 00:10:26.420 CXX test/cpp_headers/stdinc.o 00:10:26.420 CXX test/cpp_headers/string.o 00:10:26.420 CXX test/cpp_headers/thread.o 00:10:26.420 CXX test/cpp_headers/trace.o 00:10:26.420 CXX test/cpp_headers/trace_parser.o 00:10:26.420 CXX test/cpp_headers/tree.o 00:10:26.420 CXX test/cpp_headers/ublk.o 00:10:26.420 CXX test/cpp_headers/util.o 00:10:26.420 CXX test/cpp_headers/uuid.o 00:10:26.420 CXX 
test/cpp_headers/version.o 00:10:26.420 CXX test/cpp_headers/vfio_user_pci.o 00:10:26.678 CXX test/cpp_headers/vfio_user_spec.o 00:10:26.678 CXX test/cpp_headers/vhost.o 00:10:26.678 CXX test/cpp_headers/vmd.o 00:10:26.678 LINK cuse 00:10:26.678 CXX test/cpp_headers/xor.o 00:10:26.678 CXX test/cpp_headers/zipf.o 00:10:28.578 LINK esnap 00:10:28.836 00:10:28.836 real 0m57.663s 00:10:28.836 user 4m59.823s 00:10:28.836 sys 1m22.596s 00:10:28.836 13:28:41 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:10:28.836 ************************************ 00:10:28.836 END TEST make 00:10:28.836 ************************************ 00:10:28.836 13:28:41 make -- common/autotest_common.sh@10 -- $ set +x 00:10:28.836 13:28:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:28.836 13:28:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:28.836 13:28:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:28.836 13:28:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:28.836 13:28:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:28.836 13:28:41 -- pm/common@44 -- $ pid=5795 00:10:28.836 13:28:41 -- pm/common@50 -- $ kill -TERM 5795 00:10:28.836 13:28:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:28.836 13:28:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:28.836 13:28:41 -- pm/common@44 -- $ pid=5797 00:10:28.836 13:28:41 -- pm/common@50 -- $ kill -TERM 5797 00:10:29.095 13:28:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.095 13:28:41 -- nvmf/common.sh@7 -- # uname -s 00:10:29.095 13:28:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.095 13:28:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.095 13:28:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.095 13:28:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.095 13:28:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.095 13:28:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.095 13:28:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.095 13:28:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.095 13:28:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.095 13:28:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.095 13:28:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:10:29.095 13:28:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:10:29.095 13:28:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.095 13:28:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.095 13:28:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.095 13:28:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.095 13:28:41 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.095 13:28:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.095 13:28:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.095 13:28:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.095 13:28:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.095 13:28:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.096 13:28:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.096 13:28:41 -- paths/export.sh@5 -- # export PATH 00:10:29.096 13:28:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.096 13:28:42 -- nvmf/common.sh@47 -- # : 0 00:10:29.096 13:28:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.096 13:28:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.096 13:28:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.096 13:28:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.096 13:28:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.096 13:28:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.096 13:28:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.096 13:28:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.096 13:28:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:29.096 13:28:42 -- spdk/autotest.sh@32 -- # uname -s 00:10:29.096 13:28:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:29.096 13:28:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:29.096 13:28:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:29.096 13:28:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:29.096 13:28:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:29.096 13:28:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:29.096 13:28:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:29.096 13:28:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:29.096 13:28:42 -- spdk/autotest.sh@48 -- # udevadm_pid=65972 00:10:29.096 13:28:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:29.096 13:28:42 -- pm/common@17 -- # local monitor 00:10:29.096 13:28:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:29.096 13:28:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:29.096 13:28:42 -- pm/common@25 -- # sleep 1 00:10:29.096 13:28:42 -- pm/common@21 -- # date +%s 00:10:29.096 13:28:42 -- pm/common@21 -- # date +%s 00:10:29.096 13:28:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:29.096 13:28:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715779722 
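Editor's note: the trace above shows autotest.sh launching the pm resource monitors (collect-cpu-load and collect-vmstat) with a shared epoch suffix, while the earlier stop_monitor_resources step terminates them via their .pid files with kill -TERM. Below is a minimal sketch of that start/stop pattern, assuming a generic vmstat sampler stands in for the real scripts under scripts/perf/pm; the file names and the sample_loop helper are illustrative, not the actual pm/common implementation.

#!/usr/bin/env bash
# Illustrative start/stop pattern for a background resource monitor.
# Assumption: a simple vmstat loop stands in for collect-cpu-load/collect-vmstat.

output_dir=${1:-/tmp/power}
suffix=$(date +%s)                        # shared timestamp, like monitor.autotest.sh.<epoch>
mkdir -p "$output_dir"

sample_loop() {                           # hypothetical sampler; the real scripts live in scripts/perf/pm
    while true; do
        vmstat 1 1 >> "$output_dir/monitor.autotest.sh.${suffix}_vmstat.log"
        sleep 1
    done
}

start_monitor() {
    sample_loop &
    echo $! > "$output_dir/collect-vmstat.pid"     # PID file consumed later by the stop step
}

stop_monitor() {
    local pid
    [[ -e $output_dir/collect-vmstat.pid ]] || return 0
    pid=$(<"$output_dir/collect-vmstat.pid")
    kill -TERM "$pid" 2>/dev/null || true          # same signal the log shows (kill -TERM <pid>)
    rm -f "$output_dir/collect-vmstat.pid"
}

start_monitor
sleep 5                                   # stand-in for the actual test run
stop_monitor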
00:10:29.096 13:28:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715779722 00:10:29.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715779722_collect-vmstat.pm.log 00:10:29.096 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715779722_collect-cpu-load.pm.log 00:10:30.050 13:28:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:30.050 13:28:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:30.050 13:28:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:30.050 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:10:30.050 13:28:43 -- spdk/autotest.sh@59 -- # create_test_list 00:10:30.050 13:28:43 -- common/autotest_common.sh@744 -- # xtrace_disable 00:10:30.050 13:28:43 -- common/autotest_common.sh@10 -- # set +x 00:10:30.050 13:28:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:30.050 13:28:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:30.050 13:28:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:30.050 13:28:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:30.050 13:28:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:30.050 13:28:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:30.050 13:28:43 -- common/autotest_common.sh@1451 -- # uname 00:10:30.050 13:28:43 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:10:30.050 13:28:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:30.050 13:28:43 -- common/autotest_common.sh@1471 -- # uname 00:10:30.308 13:28:43 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:10:30.308 13:28:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:30.308 13:28:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:30.308 13:28:43 -- spdk/autotest.sh@72 -- # hash lcov 00:10:30.308 13:28:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:30.308 13:28:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:30.308 --rc lcov_branch_coverage=1 00:10:30.308 --rc lcov_function_coverage=1 00:10:30.308 --rc genhtml_branch_coverage=1 00:10:30.308 --rc genhtml_function_coverage=1 00:10:30.308 --rc genhtml_legend=1 00:10:30.308 --rc geninfo_all_blocks=1 00:10:30.308 ' 00:10:30.308 13:28:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:30.308 --rc lcov_branch_coverage=1 00:10:30.308 --rc lcov_function_coverage=1 00:10:30.308 --rc genhtml_branch_coverage=1 00:10:30.308 --rc genhtml_function_coverage=1 00:10:30.308 --rc genhtml_legend=1 00:10:30.308 --rc geninfo_all_blocks=1 00:10:30.308 ' 00:10:30.308 13:28:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:30.308 --rc lcov_branch_coverage=1 00:10:30.308 --rc lcov_function_coverage=1 00:10:30.308 --rc genhtml_branch_coverage=1 00:10:30.308 --rc genhtml_function_coverage=1 00:10:30.308 --rc genhtml_legend=1 00:10:30.308 --rc geninfo_all_blocks=1 00:10:30.308 --no-external' 00:10:30.308 13:28:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:30.308 --rc lcov_branch_coverage=1 00:10:30.308 --rc lcov_function_coverage=1 00:10:30.308 --rc genhtml_branch_coverage=1 00:10:30.308 --rc genhtml_function_coverage=1 00:10:30.308 --rc genhtml_legend=1 00:10:30.308 --rc geninfo_all_blocks=1 00:10:30.308 --no-external' 00:10:30.308 13:28:43 -- spdk/autotest.sh@83 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:30.308 lcov: LCOV version 1.14 00:10:30.308 13:28:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:40.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:40.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:40.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:40.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:40.269 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:40.269 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:10:48.382 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:48.382 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no 
functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:11:03.253 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:11:03.253 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:11:03.254 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:11:03.254 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:11:03.254 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:05.785 13:29:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:05.785 13:29:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:05.785 13:29:18 -- common/autotest_common.sh@10 -- # set +x 00:11:05.785 13:29:18 -- spdk/autotest.sh@91 -- # rm -f 00:11:05.785 13:29:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.352 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:06.352 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:06.352 13:29:19 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:06.352 13:29:19 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:06.352 13:29:19 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:06.352 13:29:19 -- common/autotest_common.sh@1666 -- # 
local nvme bdf 00:11:06.352 13:29:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:06.352 13:29:19 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:06.352 13:29:19 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:06.352 13:29:19 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:06.352 13:29:19 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:06.352 13:29:19 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:06.352 13:29:19 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:06.352 13:29:19 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:11:06.352 13:29:19 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:11:06.352 13:29:19 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:06.352 13:29:19 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:11:06.352 13:29:19 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:11:06.352 13:29:19 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:06.352 13:29:19 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:06.352 13:29:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:06.352 13:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:06.352 13:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:06.352 13:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:06.352 13:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:06.352 13:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:06.352 No valid GPT data, bailing 00:11:06.352 13:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:06.352 13:29:19 -- scripts/common.sh@391 -- # pt= 00:11:06.352 13:29:19 -- scripts/common.sh@392 -- # return 1 00:11:06.352 13:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:06.352 1+0 records in 00:11:06.352 1+0 records out 00:11:06.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490989 s, 214 MB/s 00:11:06.352 13:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:06.352 13:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:06.352 13:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:06.352 13:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:06.352 13:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:06.352 No valid GPT data, bailing 00:11:06.352 13:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:06.352 13:29:19 -- scripts/common.sh@391 -- # pt= 00:11:06.352 13:29:19 -- scripts/common.sh@392 -- # return 1 00:11:06.352 13:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:06.352 1+0 records in 00:11:06.352 1+0 records out 00:11:06.352 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.00405396 s, 259 MB/s 00:11:06.352 13:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:06.352 13:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:06.352 13:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:11:06.352 13:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:11:06.352 13:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:06.611 No valid GPT data, bailing 00:11:06.611 13:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:06.611 13:29:19 -- scripts/common.sh@391 -- # pt= 00:11:06.611 13:29:19 -- scripts/common.sh@392 -- # return 1 00:11:06.611 13:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:06.611 1+0 records in 00:11:06.611 1+0 records out 00:11:06.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00393686 s, 266 MB/s 00:11:06.611 13:29:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:06.611 13:29:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:06.611 13:29:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:11:06.611 13:29:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:11:06.611 13:29:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:06.611 No valid GPT data, bailing 00:11:06.611 13:29:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:06.611 13:29:19 -- scripts/common.sh@391 -- # pt= 00:11:06.611 13:29:19 -- scripts/common.sh@392 -- # return 1 00:11:06.611 13:29:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:06.611 1+0 records in 00:11:06.611 1+0 records out 00:11:06.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540202 s, 194 MB/s 00:11:06.611 13:29:19 -- spdk/autotest.sh@118 -- # sync 00:11:06.611 13:29:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:06.611 13:29:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:06.611 13:29:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:08.511 13:29:21 -- spdk/autotest.sh@124 -- # uname -s 00:11:08.511 13:29:21 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:08.511 13:29:21 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:08.511 13:29:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.511 13:29:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.511 13:29:21 -- common/autotest_common.sh@10 -- # set +x 00:11:08.511 ************************************ 00:11:08.511 START TEST setup.sh 00:11:08.511 ************************************ 00:11:08.511 13:29:21 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:08.511 * Looking for test storage... 
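Editor's note: the pre_cleanup trace above probes each /dev/nvme*n* namespace for a partition table (spdk-gpt.py plus blkid -s PTTYPE) and, when none is found ("No valid GPT data, bailing"), writes 1 MiB of zeros to it as a quick write check. A rough recreation of that per-device loop, assuming blkid alone stands in for the spdk-gpt.py probe; the wipe_if_blank name is mine and this should only ever run against disposable test disks.

#!/usr/bin/env bash
# Rough recreation of the per-device probe traced above.
# Assumption: blkid alone replaces scripts/spdk-gpt.py; destructive, test disks only.

wipe_if_blank() {                         # hypothetical helper name
    local dev=$1 pt
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "No valid GPT data, bailing"             # message matching the log
        dd if=/dev/zero of="$dev" bs=1M count=1       # 1 MiB zero write, as in the trace
    else
        echo "$dev already carries a $pt partition table, skipping"
    fi
}

for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue         # skip partitions, keep whole namespaces
    wipe_if_blank "$dev"
done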
00:11:08.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:08.511 13:29:21 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:08.511 13:29:21 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:08.511 13:29:21 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:08.511 13:29:21 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:08.511 13:29:21 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.511 13:29:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:08.511 ************************************ 00:11:08.511 START TEST acl 00:11:08.511 ************************************ 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:08.511 * Looking for test storage... 00:11:08.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:08.511 13:29:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:08.511 13:29:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:08.512 13:29:21 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:08.512 13:29:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:08.512 13:29:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:08.512 13:29:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:08.512 
13:29:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:08.512 13:29:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:08.512 13:29:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:08.512 13:29:21 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:09.444 13:29:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:09.444 13:29:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:09.444 13:29:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:09.444 13:29:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:09.444 13:29:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:09.444 13:29:22 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:10.008 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:11:10.008 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:10.008 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.008 Hugepages 00:11:10.008 node hugesize free / total 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.265 00:11:10.265 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:10.265 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:11:10.535 13:29:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:11:10.535 13:29:23 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:10.535 13:29:23 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.535 13:29:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:10.535 ************************************ 00:11:10.535 START TEST denied 
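Editor's note: the acl test above parses `setup.sh status` output into device/driver pairs, and the denied/allowed cases that follow verify which kernel driver each controller is bound to by resolving its driver symlink under sysfs. A small sketch of that verification step, assuming the two NVMe BDFs seen in this log; the expect_driver wrapper is illustrative, not the script's actual verify() helper.

#!/usr/bin/env bash
# Sketch of the driver-binding check used by the acl denied/allowed tests.
# Assumption: the BDFs below are the two NVMe controllers seen in this log.

expect_driver() {                         # illustrative wrapper, not the real verify()
    local bdf=$1 want=$2 link driver
    link=/sys/bus/pci/devices/$bdf/driver
    [[ -e $link ]] || { echo "$bdf: not bound to any driver"; return 1; }
    driver=$(basename "$(readlink -f "$link")")   # e.g. nvme or uio_pci_generic
    if [[ $driver == "$want" ]]; then
        echo "$bdf: bound to $driver (ok)"
    else
        echo "$bdf: bound to $driver, expected $want"
        return 1
    fi
}

expect_driver 0000:00:10.0 nvme
expect_driver 0000:00:11.0 nvme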
00:11:10.535 ************************************ 00:11:10.535 13:29:23 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:11:10.535 13:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:11:10.535 13:29:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:11:10.535 13:29:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:10.535 13:29:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:10.535 13:29:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:11.472 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:11.472 13:29:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:12.038 00:11:12.038 real 0m1.680s 00:11:12.038 user 0m0.618s 00:11:12.038 sys 0m1.002s 00:11:12.038 13:29:25 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:12.038 ************************************ 00:11:12.038 END TEST denied 00:11:12.038 ************************************ 00:11:12.038 13:29:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:12.296 13:29:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:12.296 13:29:25 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:12.296 13:29:25 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.296 13:29:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:12.296 ************************************ 00:11:12.296 START TEST allowed 00:11:12.296 ************************************ 00:11:12.296 13:29:25 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:11:12.296 13:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:11:12.296 13:29:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:12.296 13:29:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:12.296 13:29:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:12.296 13:29:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:11:13.229 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:13.229 13:29:26 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:14.164 00:11:14.164 real 0m1.806s 00:11:14.164 user 0m0.712s 00:11:14.164 sys 0m1.088s 00:11:14.164 13:29:26 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.164 13:29:26 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:14.164 ************************************ 00:11:14.164 END TEST allowed 00:11:14.164 ************************************ 00:11:14.164 00:11:14.164 real 0m5.606s 00:11:14.164 user 0m2.259s 00:11:14.164 sys 0m3.314s 00:11:14.164 13:29:27 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.164 13:29:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:14.164 ************************************ 00:11:14.164 END TEST acl 00:11:14.164 ************************************ 00:11:14.164 13:29:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:14.164 13:29:27 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:14.164 13:29:27 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.164 13:29:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:14.164 ************************************ 00:11:14.164 START TEST hugepages 00:11:14.164 ************************************ 00:11:14.164 13:29:27 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:14.164 * Looking for test storage... 
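Editor's note: the hugepages test that starts here walks /proc/meminfo field by field (the get_meminfo trace below) to pick out Hugepagesize and the HugePages_* counters. A compact way to read the same values, assuming a single awk pass instead of the script's read loop; the per-node variant would read /sys/devices/system/node/node<N>/meminfo instead, as the trace's node check suggests.

#!/usr/bin/env bash
# Compact stand-in for the get_meminfo parsing traced below.
# Assumption: system-wide /proc/meminfo; swap in the per-node path for NUMA-aware counts.

get_meminfo() {
    local field=$1
    awk -v f="$field" -F': *' '$1 == f {print $2+0}' /proc/meminfo
}

echo "Hugepagesize:    $(get_meminfo Hugepagesize) kB"
echo "HugePages_Total: $(get_meminfo HugePages_Total)"
echo "HugePages_Free:  $(get_meminfo HugePages_Free)"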
00:11:14.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 4613740 kB' 'MemAvailable: 7425532 kB' 'Buffers: 3456 kB' 'Cached: 3007904 kB' 'SwapCached: 0 kB' 'Active: 425704 kB' 'Inactive: 2691160 kB' 'Active(anon): 105304 kB' 'Inactive(anon): 10692 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680468 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 105460 kB' 'Mapped: 48860 kB' 'Shmem: 10492 kB' 'KReclaimable: 95428 kB' 'Slab: 173524 kB' 'SReclaimable: 95428 kB' 'SUnreclaim: 78096 kB' 'KernelStack: 4736 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12407572 kB' 'Committed_AS: 327496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:11:14.164 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:14.165 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:14.165 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:11:14.165 [... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue trace repeats for every remaining /proc/meminfo field from Buffers through HugePages_Surp (the full field list appears in the meminfo dumps below); none of them matches, so get_meminfo keeps scanning ...]
00:11:14.166 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _ 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:14.166 13:29:27 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:14.166 13:29:27 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:14.166 13:29:27 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.166 13:29:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:14.166 ************************************ 00:11:14.166 START TEST default_setup 00:11:14.166 ************************************ 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- 
# node_ids=('0') 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.166 13:29:27 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.099 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.099 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.099 13:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.099 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6700600 kB' 'MemAvailable: 9512268 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442520 kB' 'Inactive: 2691152 kB' 'Active(anon): 122120 kB' 'Inactive(anon): 10672 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122144 kB' 'Mapped: 49028 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173388 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78232 kB' 'KernelStack: 4800 kB' 'PageTables: 3552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.363 13:29:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _
00:11:15.363 [... the get_meminfo AnonHugePages pass repeats the same read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace for every field of the dump above from Cached through Percpu, without a match ...]
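The trace above is setup/common.sh's get_meminfo helper: it walks /proc/meminfo (or a node's meminfo file) one "Key: value" pair at a time, skipping every field until the requested one is found, then echoes its value; the real helper first slurps the file with mapfile before looping. Stripped of the xtrace noise, the pattern amounts to the sketch below -- function and variable names here are illustrative, not the ones in the repo.

  #!/usr/bin/env bash
  # Minimal re-implementation of the lookup pattern traced above (illustrative only).
  meminfo_lookup() {
      local want=$1 node=${2:-}        # field name, optional NUMA node number
      local file=/proc/meminfo var val _
      [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # not the field we want -> keep reading
          echo "$val"                         # value in kB, or a bare page count
          return 0
      done < <(sed 's/^Node [0-9]* //' "$file")   # per-node files prefix each line with "Node N"
      return 1
  }
  meminfo_lookup Hugepagesize       # prints 2048 on this build host
  meminfo_lookup HugePages_Total    # prints 1024 once default_setup has run

Each non-matching field shows up in the log as one [[ ... ]] / continue pair, which is why a single get_meminfo call produces dozens of trace lines.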
00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.364 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.365 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6700600 kB' 'MemAvailable: 9512268 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442360 kB' 'Inactive: 2691144 kB' 'Active(anon): 121960 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 49012 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173372 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78216 kB' 'KernelStack: 4704 kB' 'PageTables: 3332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 
'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB'
00:11:15.365 [... get_meminfo HugePages_Surp then walks the same dump with the read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace for every field from MemTotal through HugePages_Rsvd before the key matches ...]
00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
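At this point default_setup has asked for 2097152 kB of 2048 kB hugepages on node 0 (nr_hugepages=1024), and verify_nr_hugepages is reading the hugepage counters back out of the same snapshot: the dumps above report HugePages_Total: 1024, HugePages_Free: 1024 and Hugetlb: 2097152 kB, with the surplus and reserved counters checked next. A rough standalone version of that sanity check (names and the exact bookkeeping are illustrative, not the repo's) could look like:

  #!/usr/bin/env bash
  # Illustrative check that the kernel allocated the hugepages the test asked for.
  set -euo pipefail
  want_kb=2097152                                              # 2 GiB requested by default_setup
  size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  expected=$(( want_kb / size_kb ))                            # 2097152 / 2048 = 1024 pages
  echo "expected=$expected total=$total free=$free rsvd=$rsvd surp=$surp"
  (( total == expected )) || { echo "hugepage count mismatch" >&2; exit 1; }
  (( surp == 0 && rsvd == 0 )) || echo "warning: surplus/reserved hugepages present" >&2

The 2048 kB default detected earlier is what turns the 2097152 kB request into exactly 1024 pages, matching the HugePages_Total and HugePages_Free values in the dumps.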
13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706336 kB' 'MemAvailable: 9518004 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442224 kB' 'Inactive: 2691136 kB' 'Active(anon): 121824 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 122088 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173368 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78212 kB' 'KernelStack: 4720 kB' 'PageTables: 3360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.366 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.367 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:15.368 nr_hugepages=1024 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:15.368 resv_hugepages=0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:15.368 surplus_hugepages=0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:15.368 anon_hugepages=0 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:15.368 13:29:28 
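[editor's note] At this point the trace has established nr_hugepages=1024, resv_hugepages=0 and surplus_hugepages=0, and setup/hugepages.sh@107 checks that these add up before continuing. A minimal sketch of that consistency check, using the values observed in this run (variable names follow the trace; this is not the verbatim setup/hugepages.sh):

    # Values echoed by the trace above for this run.
    nr_hugepages=1024   # pages requested by the default_setup test
    surp=0              # HugePages_Surp read back via get_meminfo
    resv=0              # HugePages_Rsvd read back via get_meminfo
    # The step passes only if the configured total is fully accounted
    # for: 1024 == 1024 + 0 + 0.
    (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"
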
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.368 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6705836 kB' 'MemAvailable: 9517504 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442136 kB' 'Inactive: 2691136 kB' 'Active(anon): 121736 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121948 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173360 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78204 kB' 'KernelStack: 4688 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.369 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.370 13:29:28 
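[editor's note] The scan that follows re-runs get_meminfo with node=0, so the source file switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo. Below is a minimal reconstruction of that helper as it appears in the xtrace (setup/common.sh); a readability sketch assuming the loop body matches the traced commands, not the verbatim script:

    shopt -s extglob                      # the "Node <N> " prefix strip uses +([0-9])

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # With a node argument, read that node's counters instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines are prefixed with "Node <N> "; drop the prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Skip every key until the requested one (the long runs of
            # "continue" in the trace), then print its value.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        echo 0
    }

Called as get_meminfo HugePages_Surp 0, as at setup/common.sh@17 above, it prints 0 for this run.
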
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6705836 kB' 'MemUsed: 5526404 kB' 'SwapCached: 0 kB' 'Active: 442068 kB' 'Inactive: 2691136 kB' 'Active(anon): 121668 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 3011348 kB' 'Mapped: 48896 kB' 'AnonPages: 121880 kB' 'Shmem: 10468 kB' 'KernelStack: 4672 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173360 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:15.370 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 
13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.661 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:15.662 node0=1024 expecting 1024 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:15.662 00:11:15.662 real 0m1.277s 00:11:15.662 user 0m0.535s 00:11:15.662 sys 0m0.615s 00:11:15.662 13:29:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:15.663 13:29:28 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 ************************************ 00:11:15.663 END TEST default_setup 00:11:15.663 ************************************ 00:11:15.663 13:29:28 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:15.663 13:29:28 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:15.663 13:29:28 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
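[editor's note] The long run of "continue" entries that ends above is setup/common.sh's get_meminfo helper scanning a meminfo snapshot field by field until it hits HugePages_Surp, echoing 0, after which hugepages.sh folds the value into nodes_test and prints "node0=1024 expecting 1024". A minimal sketch of that scan pattern, simplified and not the actual SPDK helper (the real one, as the trace shows, can also read /sys/devices/system/node/node$N/meminfo and strips the "Node N " prefix):

    # Simplified sketch of the field scan traced above: split each
    # /proc/meminfo line on ': ' and print the value of the requested field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
    }
    # e.g. get_meminfo_sketch HugePages_Surp   # prints 0 on this host
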
00:11:15.663 13:29:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:15.663 ************************************ 00:11:15.663 START TEST per_node_1G_alloc 00:11:15.663 ************************************ 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:15.663 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:15.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.923 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:15.923 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7761152 kB' 'MemAvailable: 10572820 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442572 kB' 'Inactive: 2691144 kB' 'Active(anon): 122172 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122420 kB' 'Mapped: 49256 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173348 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78192 kB' 'KernelStack: 4776 kB' 'PageTables: 3716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53392 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.923 13:29:28 
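[editor's note] The per_node_1G_alloc block that starts above asks get_test_nr_hugepages for 1048576 kB (1 GiB) on node 0; with the 2048 kB default hugepage size that works out to 512 pages, so the test exports NRHUGE=512 and HUGENODE=0 before re-running scripts/setup.sh, and the meminfo snapshot read back by verify_nr_hugepages indeed reports 'HugePages_Total: 512' with 'Hugepagesize: 2048 kB'. The arithmetic, spelled out with illustrative variable names of our own (values taken from the trace):

    # Illustrative only; names are ours, values come from the trace above.
    size_kb=1048576          # requested per-node size: 1 GiB in kB
    default_hugepages=2048   # Hugepagesize from meminfo, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # -> 512
    echo "NRHUGE=$nr_hugepages HUGENODE=0"            # matches the exported values
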
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.923 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.924 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7761152 kB' 'MemAvailable: 10572820 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442400 kB' 'Inactive: 2691136 kB' 'Active(anon): 122000 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122208 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173356 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78200 kB' 'KernelStack: 4788 kB' 'PageTables: 3656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.925 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:28 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.926 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:15.927 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:15.927 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:16.188 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:16.189 13:29:29 
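[editor's note] At this point verify_nr_hugepages has taken anon=0 from the AnonHugePages pass and surp=0 from the HugePages_Surp pass, and is starting a third scan for HugePages_Rsvd. The default_setup wrap-up earlier shows the shape of the final check: the surplus is folded into nodes_test and the script prints "nodeN=<count> expecting <count>" before asserting equality. A compact sketch of that check, under the assumption that the expected count for this test is the 512 pages requested above (simplified, not verbatim hugepages.sh):

    # Rough shape of the per-node verification; nodes_test[] and the
    # expected value of 512 are assumptions based on the trace above.
    nodes_test=( [0]=512 )
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting 512"
        (( nodes_test[node] == 512 )) || exit 1
    done
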
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7761152 kB' 'MemAvailable: 10572820 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442432 kB' 'Inactive: 2691136 kB' 'Active(anon): 122032 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173356 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78200 kB' 'KernelStack: 4788 kB' 'PageTables: 3660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53392 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 
13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 
13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.189 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 
13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:16.190 nr_hugepages=512 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:16.190 resv_hugepages=0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:16.190 surplus_hugepages=0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:16.190 
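The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo: it reads the file into an array, strips any "Node N " prefix with an extglob substitution, splits each "Key: value" line on IFS=': ', skips (continue) every key that is not the one requested, and echoes the value of the match -- here HugePages_Rsvd, which comes back as 0 and is stored as resv=0. Below is a minimal stand-alone sketch of the same technique, assuming a Linux /proc and sysfs layout; the function name and defaults are illustrative, not the SPDK helper itself.

#!/usr/bin/env bash
# Minimal sketch of the meminfo-scanning technique traced above (illustrative;
# not the SPDK script). Prints the numeric value of one key from /proc/meminfo,
# or from a per-node meminfo file when a NUMA node number is given.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}           # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every non-matching key
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd      # prints 0 on the system traced above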
anon_hugepages=0 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.190 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7761152 kB' 'MemAvailable: 10572820 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 442308 kB' 'Inactive: 2691136 kB' 'Active(anon): 121908 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 122116 kB' 'Mapped: 49116 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173348 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78192 kB' 'KernelStack: 4740 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.191 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7761152 kB' 'MemUsed: 4471088 kB' 'SwapCached: 0 kB' 'Active: 442484 kB' 'Inactive: 2691144 kB' 'Active(anon): 122084 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 3011348 kB' 'Mapped: 49124 kB' 'AnonPages: 122336 kB' 'Shmem: 10468 kB' 'KernelStack: 4756 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173348 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.192 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.193 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.194 13:29:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:16.194 node0=512 expecting 512 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:16.194 00:11:16.194 real 0m0.698s 00:11:16.194 user 0m0.282s 00:11:16.194 sys 0m0.341s 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.194 13:29:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:16.194 ************************************ 00:11:16.194 END TEST per_node_1G_alloc 00:11:16.194 ************************************ 00:11:16.194 13:29:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:16.194 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:16.194 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:16.194 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:16.452 ************************************ 00:11:16.452 START TEST even_2G_alloc 00:11:16.452 ************************************ 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:16.452 13:29:29 
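
even_2G_alloc begins by translating the requested 2097152 kB (2 GiB) into a page count: with the 2048 kB default hugepage size reported in the snapshots below, 2097152 / 2048 = 1024, which is exactly the nr_hugepages=1024 the trace settles on, and since only one NUMA node is present the per-node split that follows places all 1024 pages on node 0. A minimal sketch of that conversion, assuming 2 MiB default pages and using an illustrative layout rather than the real setup/hugepages.sh helpers:

#!/usr/bin/env bash
# Sketch: turn a target allocation size (in kB) into a hugepage count and an
# even per-node split, mirroring the 2097152 kB -> 1024 pages step above.
# Illustrative only; not the test's own get_test_nr_hugepages implementation.
target_kb=2097152

# Default hugepage size as the kernel reports it (2048 kB on this system).
default_hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
: "${default_hugepage_kb:=2048}"

nr_hugepages=$((target_kb / default_hugepage_kb))   # 2097152 / 2048 = 1024

# Spread the pool evenly across the online NUMA nodes; with a single node the
# whole pool lands on node0, matching nodes_test[0]=1024 in the trace.
mapfile -t nodes < <(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null)
((${#nodes[@]})) || nodes=(/sys/devices/system/node/node0)
per_node=$((nr_hugepages / ${#nodes[@]}))
echo "nr_hugepages=$nr_hugepages nodes=${#nodes[@]} per_node=$per_node"

The NRHUGE=1024 and HUGE_EVEN_ALLOC=yes settings that follow hand exactly this pool size to scripts/setup.sh before the allocation is verified.
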
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:16.452 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:16.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.712 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:16.712 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:16.712 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:16.712 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
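
scripts/setup.sh skips the virtio disk backing the mounted filesystems (0000:00:03.0) and leaves both emulated NVMe controllers on uio_pci_generic, then verify_nr_hugepages starts sampling counters. Its first check, the 'always [madvise] never != *[never]*' comparison, is simply the content of /sys/kernel/mm/transparent_hugepage/enabled being tested to decide whether AnonHugePages is worth reading. The long runs of '[[ key == ... ]]' / 'continue' entries around it are a lookup helper walking a captured copy of /proc/meminfo (or a node's meminfo) field by field until it reaches the requested key. A simplified reconstruction of that pattern, with an illustrative function name rather than the actual setup/common.sh helper:

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern visible in the trace; the function
# name is illustrative and the real logic lives in setup/common.sh.
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters come from sysfs and carry a "Node <n> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # drop the per-node prefix, if any

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every other key
        echo "$val"                        # e.g. 0 for AnonHugePages here
        return 0
    done
    return 1
}

get_meminfo_value AnonHugePages      # system-wide value, in kB
get_meminfo_value HugePages_Surp 0   # node 0 variant

Each value the helper echoes is what the hugepages.sh caller stores (anon, surp, resv), so every call is one /proc/meminfo read followed by the long but mechanical scan recorded here.
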
'MemTotal: 12232240 kB' 'MemFree: 6705028 kB' 'MemAvailable: 9516696 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 443008 kB' 'Inactive: 2691136 kB' 'Active(anon): 122608 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122900 kB' 'Mapped: 49164 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173296 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78140 kB' 'KernelStack: 4828 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 343856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.713 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6704776 kB' 'MemAvailable: 9516448 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442368 kB' 'Inactive: 2691140 kB' 'Active(anon): 121968 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173340 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78184 kB' 'KernelStack: 4712 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.714 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 
13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.715 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:16.716 13:29:29 
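
Both samples came back 0: no transparent hugepages are in use (anon=0) and the kernel has allocated no surplus pages beyond the configured pool (surp=0). The lookup now under way asks for HugePages_Rsvd, pages already promised to existing mappings but not yet faulted in. The snapshots above already show the pool in the expected state (HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB). A quick stand-alone way to read the same counters, shown as a sketch rather than the test's own accounting:

#!/usr/bin/env bash
# Sketch: pull the hugepage counters this verification keeps sampling.
# The "unreserved free" figure is an assumption about what one would compare
# against the requested 1024 pages, not necessarily the test's formula.
awk '/^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):/' /proc/meminfo

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
echo "total=$total free=$free rsvd=$rsvd surp=$surp unreserved_free=$((free - rsvd))"
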
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6704776 kB' 'MemAvailable: 9516448 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442612 kB' 'Inactive: 2691140 kB' 'Active(anon): 122212 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122492 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173340 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78184 kB' 'KernelStack: 4712 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:11:16.716 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.978 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
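The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" lines in this pass is the xtrace of setup/common.sh's get_meminfo helper walking every /proc/meminfo field until it reaches the one that was asked for (HugePages_Rsvd here, HugePages_Total and HugePages_Surp in the passes that follow). A minimal sketch of that helper, reconstructed only from this trace, so details of the real setup/common.sh may differ:

    shopt -s extglob                        # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Rsvd, get_meminfo HugePages_Surp 0
        local var val _ mem
        local mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo instead of the global one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so the keys match.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }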
00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
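The meminfo snapshot printed at the top of this pass is internally consistent with the 2G the test allocates: 1024 huge pages of 2048 kB each account exactly for the reported Hugetlb figure. As a quick standalone check (plain shell arithmetic, not part of the test script):

    hugepages_total=1024
    hugepagesize_kb=2048
    hugetlb_kb=$(( hugepages_total * hugepagesize_kb ))            # 2097152 kB, as reported
    echo "$hugetlb_kb kB = $(( hugetlb_kb / 1024 / 1024 )) GiB"    # 2 GiB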
00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.979 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:16.980 nr_hugepages=1024 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:16.980 resv_hugepages=0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:16.980 surplus_hugepages=0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:16.980 anon_hugepages=0 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6704776 kB' 'MemAvailable: 9516448 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442368 kB' 'Inactive: 2691140 kB' 'Active(anon): 121968 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173340 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78184 kB' 'KernelStack: 4712 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
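The hugepages.sh@100-110 lines just above are the even_2G_alloc bookkeeping: HugePages_Rsvd came back as 0, the test reports nr_hugepages=1024 with no reserved, surplus, or anonymous huge pages, and it then requires that the kernel's totals add up (1024 == nr_hugepages + surp + resv) before re-reading HugePages_Total in the pass starting here. Reduced to a standalone sketch that reuses the get_meminfo approximation above (variable names follow the trace, not the script itself):

    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run (read earlier in the full log)
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2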
00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.980 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.981 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
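Once the global count checks out, the trace moves on to the per-node distribution: get_nodes enumerates /sys/devices/system/node/node*, verify_nr_hugepages folds each node's reserved and surplus pages into its expected count, and the pass ends with "node0=1024 expecting 1024" — this VM has a single NUMA node, so the even split is simply the whole 1024-page allocation. A rough standalone equivalent, under the same assumptions as the sketches above (names are illustrative, not the script's):

    expected_per_node=1024    # one node, so the full 2G allocation should sit on node0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        # Node meminfo lines read "Node N HugePages_Total: <count>".
        got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        echo "node$n=$got expecting $expected_per_node"
        [[ $got == "$expected_per_node" ]] || echo "node$n holds an unexpected hugepage count" >&2
    done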
00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.982 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6704776 kB' 'MemUsed: 5527464 kB' 'SwapCached: 0 kB' 'Active: 442256 kB' 'Inactive: 2691140 kB' 'Active(anon): 121856 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 3011352 kB' 'Mapped: 
48904 kB' 'AnonPages: 122416 kB' 'Shmem: 10468 kB' 'KernelStack: 4712 kB' 'PageTables: 3316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173340 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.983 13:29:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.983 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:16.984 node0=1024 expecting 1024 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:16.984 00:11:16.984 real 0m0.665s 00:11:16.984 user 0m0.300s 00:11:16.984 sys 0m0.305s 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.984 13:29:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:16.984 ************************************ 00:11:16.984 END TEST even_2G_alloc 00:11:16.984 ************************************ 00:11:16.984 13:29:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:16.984 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:16.984 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:16.984 13:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:16.984 ************************************ 00:11:16.984 START TEST odd_alloc 00:11:16.984 ************************************ 00:11:16.984 13:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:11:16.984 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:16.984 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # 
local size=2098176 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:16.985 13:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:16.985 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:17.242 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:17.535 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.535 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- 
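The trace above sets up the odd_alloc case: it requests 2098176 kB of hugepage memory (HUGEMEM=2049), arrives at nr_hugepages=1025, re-runs scripts/setup.sh, and then starts verifying the result. A minimal standalone sketch of that sizing arithmetic, assuming the 2048 kB hugepage size reported in the meminfo dumps below; the ceiling division and variable names are illustrative, not the repo's get_test_nr_hugepages itself.

#!/usr/bin/env bash
# Sketch only: the sizing arithmetic consistent with the trace above.
# HUGEMEM=2049 (MB) is 2098176 kB; with 2048 kB hugepages that is
# 1024.5 pages, so the test ends up asking for an odd count of 1025.

size_kb=2098176                                        # from HUGEMEM=2049 MB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
: "${hugepagesize_kb:=2048}"                           # assume 2 MiB pages if the lookup fails

# ceiling division so a half page still costs a whole page
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=${nr_hugepages}"                    # 1025 with 2048 kB pages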
setup/common.sh@18 -- # local node= 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:17.535 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6698700 kB' 'MemAvailable: 9510372 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442788 kB' 'Inactive: 2691148 kB' 'Active(anon): 122388 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 49020 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173324 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78168 kB' 'KernelStack: 4692 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53424 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.536 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6698700 kB' 'MemAvailable: 9510372 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442340 kB' 'Inactive: 2691140 kB' 'Active(anon): 121940 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173332 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78176 kB' 'KernelStack: 4736 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53408 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 
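The block above is setup/common.sh's get_meminfo walking a /proc/meminfo snapshot field by field (IFS=': ', read -r var val _) until it reaches the requested key, echoing the value and returning; with no node given it reads /proc/meminfo directly. A minimal sketch of that lookup pattern, without the per-node sysfs handling; meminfo_value is an illustrative name, not the repo's helper.

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above (not setup/common.sh itself):
# walk /proc/meminfo as "Key: value" pairs with IFS=': ' until the
# requested key matches, then print its value.

meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"          # counters like HugePages_Surp carry no "kB" suffix
            return 0
        fi
    done < /proc/meminfo
    return 1
}

meminfo_value AnonHugePages           # kB of anonymous THP in use
meminfo_value HugePages_Surp          # surplus hugepage count (0 in the dumps above)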
13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.537 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.538 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.539 13:29:30 setup.sh.hugepages.odd_alloc -- 
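With anon=0 and surp=0 read back, the remaining check is that each NUMA node holds the 1025 pages the test asked for, mirroring the "node0=1024 expecting 1024" comparison seen at the end of even_2G_alloc above. A rough per-node sketch of that comparison, reading counts straight from sysfs rather than deriving them from meminfo as the script does; the sysfs path, single-node assumption, and variable names are assumptions for illustration.

#!/usr/bin/env bash
# Rough sketch, not the hugepages.sh verify logic itself: compare the
# hugepage count each NUMA node actually holds against what the test
# expects. The expected 1025 comes from the nr_hugepages in the trace.

expected=( [0]=1025 )                  # pages expected per node (single node assumed)

status=0
for node in "${!expected[@]}"; do
    sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    actual=$(cat "$sysfs" 2>/dev/null || echo 0)
    echo "node${node}=${actual} expecting ${expected[node]}"
    [[ $actual -eq ${expected[node]} ]] || status=1
done
exit $status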
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6698700 kB' 'MemAvailable: 9510372 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442304 kB' 'Inactive: 2691140 kB' 'Active(anon): 121904 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122192 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173324 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78168 kB' 'KernelStack: 4736 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.540 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.541 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:17.542 nr_hugepages=1025 00:11:17.542 resv_hugepages=0 00:11:17.542 surplus_hugepages=0 00:11:17.542 anon_hugepages=0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6698700 kB' 'MemAvailable: 9510372 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442220 kB' 'Inactive: 2691140 kB' 'Active(anon): 121820 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'AnonPages: 122072 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173324 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78168 kB' 'KernelStack: 4720 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13455124 kB' 'Committed_AS: 344224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53360 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.542 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 
13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.543 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.819 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.819 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6698700 kB' 'MemUsed: 5533540 kB' 'SwapCached: 0 kB' 'Active: 442300 kB' 'Inactive: 2691140 kB' 'Active(anon): 121900 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 272 kB' 'Writeback: 0 kB' 'FilePages: 3011352 kB' 'Mapped: 48896 kB' 'AnonPages: 122196 kB' 'Shmem: 10468 kB' 'KernelStack: 4736 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173324 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
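The get_meminfo calls traced above all follow the same pattern in setup/common.sh: pick /proc/meminfo or, when a node id is passed, /sys/devices/system/node/node<N>/meminfo, strip the "Node <N> " prefix, then split each line on ': ' and return the value of the requested field. A minimal standalone sketch of that pattern follows (a reconstruction from the trace, not the script itself):

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo pattern visible in the trace above.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node files carry the same fields, prefixed with "Node <N> ".
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the per-node prefix

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total       # system-wide count, 1025 at this point in the log
get_meminfo HugePages_Surp 0      # same field restricted to node 0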
00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.820 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:17.821 node0=1025 expecting 1025 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:11:17.821 00:11:17.821 real 0m0.651s 00:11:17.821 user 0m0.284s 00:11:17.821 sys 0m0.316s 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:17.821 13:29:30 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:17.821 ************************************ 00:11:17.821 END TEST odd_alloc 00:11:17.821 ************************************ 00:11:17.821 13:29:30 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:17.821 13:29:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:17.821 13:29:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:17.821 13:29:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:17.821 ************************************ 00:11:17.821 
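At this point odd_alloc has passed: the test asked for an odd page count (1025) and the figures read back above (HugePages_Total 1025, HugePages_Rsvd 0, HugePages_Surp 0, node0=1025 expecting 1025) satisfy the checks in setup/hugepages.sh. A minimal sketch of that verification arithmetic, reusing the get_meminfo sketch above, with nodes_expected standing in for the per-node layout the test configured:

# Global check (hugepages.sh@107/@110 in the trace):
nr_hugepages=1025
resv=$(get_meminfo HugePages_Rsvd)
surp=$(get_meminfo HugePages_Surp)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv )) || exit 1    # 1025 == 1025 + 0 + 0

# Per-node check: the expected layout, adjusted by reserved and per-node
# surplus pages, must match what the node itself reports.
nodes_expected=(1025)                     # index == node id; one node on this VM
for node in "${!nodes_expected[@]}"; do
    node_surp=$(get_meminfo HugePages_Surp "$node")
    (( nodes_expected[node] += resv + node_surp ))
    node_total=$(get_meminfo HugePages_Total "$node")
    echo "node$node=${nodes_expected[node]} expecting $node_total"
    (( nodes_expected[node] == node_total )) || exit 1
done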
START TEST custom_alloc 00:11:17.821 ************************************ 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:17.822 13:29:30 
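custom_alloc starts from a size rather than a page count: get_test_nr_hugepages is given 1048576 kB (1 GiB) and, with the 2048 kB default hugepage size reported in the meminfo snapshots above, that becomes the nr_hugepages=512 seen in the trace, all of it assigned to node0. A short sketch of that sizing step (the division is the evident computation behind the traced values; variable names are illustrative):

size_kb=1048576                                                  # requested pool size in kB
default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM

(( size_kb >= default_kb )) || exit 1             # request must cover at least one page
nr_hugepages=$(( size_kb / default_kb ))          # 1048576 / 2048 = 512

nodes_hp[0]=$nr_hugepages                         # single node: whole pool on node0
echo "HUGENODE='nodes_hp[0]=${nodes_hp[0]}'"      # matches the trace: nodes_hp[0]=512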
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:17.822 13:29:30 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.084 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:18.084 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:18.084 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:11:18.084 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:18.084 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # 
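The HUGENODE string assembled above is then passed in the environment of scripts/setup.sh, which performs the per-node hugepage reservation (and the device binding reported above) before verify_nr_hugepages reads the counters back. A minimal sketch of that hand-off as the trace shows it (run as root; the sysfs path assumes the default 2 MiB pages seen here):

HUGENODE='nodes_hp[0]=512' /home/vagrant/spdk_repo/spdk/scripts/setup.sh

# Inspect the result the same way the verification step does:
grep '^HugePages' /proc/meminfo
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages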
mem=("${mem[@]#Node +([0-9]) }") 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7752524 kB' 'MemAvailable: 10564196 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442328 kB' 'Inactive: 2691148 kB' 'Active(anon): 121928 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122428 kB' 'Mapped: 49032 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173264 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78108 kB' 'KernelStack: 4756 kB' 'PageTables: 3104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53408 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.085 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
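The long run of "[[ key == pattern ]] ... continue" entries above is the get_meminfo helper from setup/common.sh at work: it loads the relevant meminfo file, strips any "Node N " prefix, then walks the key/value pairs one by one until it reaches the requested field (AnonHugePages in this pass), echoes that value and returns. A minimal bash sketch of what the traced loop appears to do, reconstructed only from this xtrace and not the verbatim SPDK helper:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern that strips "Node N " prefixes

    # get_meminfo_sketch KEY [NODE] - hypothetical reconstruction of the traced helper
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Prefer the per-node meminfo file when a node is given and sysfs exposes it
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local mem var val _ line
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every key until the requested one
            echo "${val:-0}"                   # kB for sizes, a bare count for HugePages_*
            return 0
        done
        return 1
    }

On the VM traced here, get_meminfo_sketch HugePages_Free would print 512 and get_meminfo_sketch AnonHugePages would print 0, matching the anon=0 result recorded just below.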
00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7752272 kB' 'MemAvailable: 10563944 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442068 kB' 'Inactive: 2691148 kB' 'Active(anon): 121668 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122168 kB' 'Mapped: 49008 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173380 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78224 kB' 'KernelStack: 4708 kB' 'PageTables: 3176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.086 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.087 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.088 
13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7752272 kB' 'MemAvailable: 10563944 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 441972 kB' 'Inactive: 2691140 kB' 'Active(anon): 121572 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122020 kB' 'Mapped: 48904 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173380 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78224 kB' 'KernelStack: 4688 kB' 'PageTables: 3284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.088 
13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.088 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
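The same hugepage counters that these passes scan for can be cross-checked by hand with standard procfs tooling (this grep is not part of the SPDK scripts); on the VM in this run it would report the values already visible in the meminfo dumps above:

    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo
    # Per the dumps above, this run would show:
    # HugePages_Total:     512
    # HugePages_Free:      512
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB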
00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.089 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
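The earlier passes in this block resolved anon=0 and surp=0, and the HugePages_Rsvd pass finishing just below resolves resv=0; verify_nr_hugepages then checks that the 512 pages requested through HUGENODE='nodes_hp[0]=512' are fully accounted for. A hedged reconstruction of those checks (hugepages.sh@107 and @109 in the trace), reusing the get_meminfo_sketch helper sketched earlier; the real script may compute its operands differently:

    requested=512                                        # from HUGENODE='nodes_hp[0]=512'
    anon=$(get_meminfo_sketch AnonHugePages)             # 0 here; reported as anon_hugepages=0
    surp=$(get_meminfo_sketch HugePages_Surp)            # 0 here
    resv=$(get_meminfo_sketch HugePages_Rsvd)            # 0 here
    nr_hugepages=$(get_meminfo_sketch HugePages_Total)   # 512 here
    # The kernel-reported pool plus surplus and reserved pages must match the request...
    (( requested == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    # ...and with surp == resv == 0 the plain pool size must match it exactly.
    (( requested == nr_hugepages )) || echo "unexpected surplus/reserved hugepages"

Both comparisons hold in this run, so the trace proceeds to the per-node HugePages_Total pass at hugepages.sh@110 just below.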
00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:18.090 nr_hugepages=512 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:18.090 resv_hugepages=0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:18.090 surplus_hugepages=0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:18.090 anon_hugepages=0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7752024 kB' 'MemAvailable: 10563696 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442452 kB' 'Inactive: 2691148 kB' 'Active(anon): 122052 kB' 'Inactive(anon): 10664 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'AnonPages: 122200 kB' 'Mapped: 49112 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173380 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78224 kB' 'KernelStack: 4768 kB' 'PageTables: 3464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13980436 kB' 'Committed_AS: 344220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53376 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.090 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.351 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
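The HugePages_Total scan running through these lines feeds the accounting check at setup/hugepages.sh@110: the kernel-reported total must equal the requested page count plus surplus and reserved pages (512 == 512 + 0 + 0 in this run). A compressed restatement, reusing the get_meminfo sketch above (the variable names follow the echoed values, not necessarily the script's internals):

    nr_hugepages=512                            # requested for this custom_alloc pass
    surp=$(get_meminfo HugePages_Surp)          # 0
    resv=$(get_meminfo HugePages_Rsvd)          # 0
    total=$(get_meminfo HugePages_Total)        # 512 -- the lookup in progress here
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( total == nr_hugepages ))               || echo "unexpected surplus/reserved pages" >&2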
00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.352 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 7752024 kB' 'MemUsed: 4480216 kB' 'SwapCached: 0 kB' 'Active: 442036 kB' 'Inactive: 2691140 kB' 'Active(anon): 121636 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 280 kB' 'Writeback: 0 kB' 'FilePages: 3011352 kB' 'Mapped: 49052 kB' 'AnonPages: 121916 kB' 'Shmem: 10468 kB' 'KernelStack: 4720 kB' 'PageTables: 3356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173368 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.353 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.353 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:18.354 node0=512 expecting 512 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:18.354 00:11:18.354 real 0m0.553s 00:11:18.354 user 0m0.300s 00:11:18.354 sys 0m0.290s 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:18.354 13:29:31 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 ************************************ 00:11:18.354 END TEST custom_alloc 00:11:18.354 ************************************ 00:11:18.354 13:29:31 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:18.354 13:29:31 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:18.354 13:29:31 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:18.354 13:29:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:18.354 ************************************ 00:11:18.354 START TEST no_shrink_alloc 00:11:18.354 ************************************ 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:18.354 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:18.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:18.617 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:18.617 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.617 13:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6708900 kB' 'MemAvailable: 9520572 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 443360 kB' 'Inactive: 2691124 kB' 'Active(anon): 122960 kB' 'Inactive(anon): 10640 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 284 kB' 'Writeback: 0 kB' 'AnonPages: 123316 kB' 'Mapped: 49404 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173452 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78296 kB' 'KernelStack: 4836 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53408 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.617 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
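This second pass belongs to no_shrink_alloc: get_test_nr_hugepages asked for 2097152 kB on node 0, which at the 2048 kB Hugepagesize reported above works out to the 1024 pages (Hugetlb: 2097152 kB) visible in the meminfo snapshot. verify_nr_hugepages then only records AnonHugePages when transparent hugepages are not globally disabled, which is the gate and scan traced here. A rough sketch of that step, assuming the usual sysfs path for the THP setting (names are illustrative, not the script's exact ones):

    # only count anonymous hugepages when THP is not pinned to "never"
    thp_setting=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp_setting != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)      # the scan running through these lines; 0 kB here
    fi
    echo "anon_hugepages=$anon"

    # page-count arithmetic behind nr_hugepages=1024 for this test:
    size_kb=2097152 hugepagesize_kb=2048
    echo $(( size_kb / hugepagesize_kb ))      # 1024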
00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6708648 kB' 'MemAvailable: 9520320 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 442192 kB' 'Inactive: 2691116 kB' 'Active(anon): 121792 kB' 'Inactive(anon): 10632 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 49128 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173428 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78272 kB' 'KernelStack: 4740 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 344352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53392 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 
kB' 'DirectMap1G: 7340032 kB' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.618 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.619 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.619 13:29:31 
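The xtrace above is setup/common.sh's get_meminfo helper walking a /proc/meminfo snapshot one key at a time: it splits each "key: value" line with IFS=': ' / read -r var val _, skips every key that is not the one requested, and echoes the value (0 for AnonHugePages here) before returning. A minimal Bash sketch of that traced logic, reconstructed only from the xtrace lines in this log and not copied from the real setup/common.sh (per-node handling and exact structure may differ):

    #!/usr/bin/env bash
    shopt -s extglob   # the "+([0-9])" pattern used below needs extglob

    # Sketch of the traced get_meminfo helper; reconstructed from the xtrace
    # output in this log, not the authoritative setup/common.sh source.
    get_meminfo() {
        local get=$1        # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
        local node=${2:-}   # optional NUMA node; empty in this run
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument the per-node meminfo file would be used instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan key by key; IFS=': ' splits "key: value kB" into var/val/unit.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

For the snapshot printed above, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total prints 1024.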
[... the scan continues over Inactive(anon) through HugePages_Rsvd; every key other than HugePages_Surp is skipped with setup/common.sh@32 "continue" ...]
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... get_meminfo re-runs its setup: local get=HugePages_Rsvd, local node=, mem_f=/proc/meminfo, per-node meminfo check, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' / read -r var val _ ...]
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6708144 kB' 'MemAvailable: 9519816 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 437568 kB' 'Inactive: 2691116 kB' 'Active(anon): 117168 kB' 'Inactive(anon): 10632 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 117468 kB' 'Mapped: 48132 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173356 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78200 kB' 'KernelStack: 4676 kB' 'PageTables: 3160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB'
00:11:18.620 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[... the scan walks MemFree through the remaining /proc/meminfo keys against HugePages_Rsvd, skipping each with setup/common.sh@32 "continue" (timestamps advance from 00:11:18.620 to 00:11:18.883) ...]
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:11:18.883 nr_hugepages=1024
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:11:18.883 resv_hugepages=0
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:11:18.883 surplus_hugepages=0
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:11:18.883 anon_hugepages=0
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:11:18.883 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... get_meminfo re-runs its setup: local get=HugePages_Total, local node=, mem_f=/proc/meminfo, per-node meminfo check, mapfile -t mem, "Node <n> " prefix strip, IFS=': ' / read -r var val _ ...]
00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6708144 kB' 'MemAvailable: 9519816 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 437816 kB' 'Inactive: 2691116 kB' 'Active(anon): 117416 kB' 'Inactive(anon): 10632 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 117764 kB' 'Mapped: 48392 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173292 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78136 kB' 'KernelStack: 4676 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53296 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB'
00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.884 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6708144 kB' 'MemUsed: 5524096 kB' 'SwapCached: 0 kB' 'Active: 437560 kB' 'Inactive: 2691108 kB' 'Active(anon): 117160 kB' 'Inactive(anon): 10624 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 3011352 kB' 'Mapped: 48124 kB' 'AnonPages: 117216 kB' 'Shmem: 10468 kB' 'KernelStack: 4644 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95156 kB' 'Slab: 173276 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.885 13:29:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.886 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:18.887 
13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:18.887 node0=1024 expecting 1024 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:18.887 13:29:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:19.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:19.150 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:19.150 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:19.150 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.150 13:29:32 
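The tally above (node0=1024 expecting 1024) and the INFO line from scripts/setup.sh show the no_shrink_alloc case: 1024 hugepages are already allocated, so the later NRHUGE=512 request with CLEAR_HUGE=no leaves the larger allocation in place. A minimal sketch of that accounting, assuming the standard /proc/meminfo hugepage fields and using a hypothetical helper name rather than SPDK's actual setup/hugepages.sh, is:

#!/usr/bin/env bash
# Hedged sketch, not SPDK's setup/hugepages.sh: the no_shrink_alloc check
# passes when the kernel-reported hugepage total matches the requested count
# plus any reserved and surplus pages.
check_hugepage_accounting() {
    local requested=$1
    local total resv surp
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    # In the run traced here: 1024 == 1024 + 0 + 0, so the check succeeds.
    (( total == requested + resv + surp ))
}

check_hugepage_accounting 1024 && echo "hugepage accounting consistent"

The equality mirrors the (( 1024 == nr_hugepages + surp + resv )) test visible earlier in this trace.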
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706384 kB' 'MemAvailable: 9518056 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 438052 kB' 'Inactive: 2691132 kB' 'Active(anon): 117652 kB' 'Inactive(anon): 10648 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 117936 kB' 'Mapped: 48116 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173212 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78056 kB' 'KernelStack: 4656 kB' 'PageTables: 3112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53344 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.150 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.150 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.151 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706384 kB' 'MemAvailable: 9518056 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 437960 kB' 'Inactive: 2691132 kB' 'Active(anon): 117560 kB' 'Inactive(anon): 10648 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 292 kB' 'Writeback: 0 kB' 'AnonPages: 118144 kB' 'Mapped: 48056 kB' 'Shmem: 10468 kB' 'KReclaimable: 95156 kB' 'Slab: 173212 kB' 'SReclaimable: 95156 kB' 'SUnreclaim: 78056 kB' 'KernelStack: 4640 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53312 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 
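The trace up to this point is setup/common.sh walking /proc/meminfo one "Key: value" pair at a time, discarding every field that does not match the requested key (AnonHugePages feeds hugepages.sh@97 anon=0, and the same scan then restarts for HugePages_Surp over the snapshot printed above). A minimal stand-alone helper in the same spirit is sketched below; this is not the actual setup/common.sh code, and the function name get_meminfo_value is illustrative only.

  #!/usr/bin/env bash
  # get_meminfo_value KEY
  # Print the numeric value of KEY from /proc/meminfo, or fail if absent.
  get_meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue   # skip every non-matching field, as in the trace
          echo "${val:-0}"
          return 0
      done < /proc/meminfo
      return 1
  }

  # On this runner the trace resolves both of these to 0:
  get_meminfo_value AnonHugePages
  get_meminfo_value HugePages_Surp

The per-key scan is linear over /proc/meminfo, which is why the xtrace output repeats the IFS/read/continue triple once per field before the matching key finally hits the "echo 0; return 0" branch.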
00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.152 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 
13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706132 kB' 'MemAvailable: 9517796 kB' 'Buffers: 3456 kB' 'Cached: 3007892 kB' 'SwapCached: 0 kB' 'Active: 437344 kB' 'Inactive: 2691128 kB' 'Active(anon): 116944 kB' 'Inactive(anon): 10648 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680480 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 117516 kB' 'Mapped: 48104 kB' 'Shmem: 10468 kB' 'KReclaimable: 95152 kB' 'Slab: 173084 kB' 'SReclaimable: 95152 kB' 'SUnreclaim: 77932 kB' 'KernelStack: 4640 kB' 'PageTables: 2952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53264 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.153 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.153 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.154 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
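The same field-by-field scan continues here for HugePages_Rsvd; a few entries further on it returns resv=0, and setup/hugepages.sh@102-109 echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 and then verifies that every requested page is still accounted for. A hedged stand-alone sketch of that bookkeeping follows; variable names and the helper are illustrative, not the exact hugepages.sh code.

  #!/usr/bin/env bash
  set -euo pipefail

  get_meminfo_value() {   # same illustrative helper as sketched earlier
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
      done < /proc/meminfo
      return 1
  }

  expected=1024                                   # pages requested by the test
  total=$(get_meminfo_value HugePages_Total)      # 1024 in the trace above
  surp=$(get_meminfo_value HugePages_Surp)        # 0
  resv=$(get_meminfo_value HugePages_Rsvd)        # 0
  anon=$(get_meminfo_value AnonHugePages)         # 0 (kB)

  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

  # no_shrink_alloc passes only if nothing was shrunk, borrowed, or reserved:
  (( expected == total + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
  (( expected == total ))               || { echo "hugepage pool was shrunk"     >&2; exit 1; }

With the values in this run (1024 total, 0 surplus, 0 reserved) both arithmetic checks hold, which is why the trace then proceeds to a fresh get_meminfo HugePages_Total pass rather than failing out.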
00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:19.155 nr_hugepages=1024 00:11:19.155 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:19.155 resv_hugepages=0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:19.155 surplus_hugepages=0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:19.155 anon_hugepages=0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.155 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706384 kB' 'MemAvailable: 9518052 kB' 'Buffers: 3456 kB' 'Cached: 3007896 kB' 'SwapCached: 0 kB' 'Active: 437440 kB' 'Inactive: 2691140 kB' 'Active(anon): 117040 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'AnonPages: 117348 kB' 'Mapped: 47904 kB' 'Shmem: 10468 kB' 'KReclaimable: 95152 kB' 'Slab: 173080 kB' 'SReclaimable: 95152 kB' 'SUnreclaim: 77928 kB' 'KernelStack: 4608 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13456148 kB' 'Committed_AS: 326408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 53248 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 7155712 kB' 'DirectMap1G: 7340032 kB' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 
13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.156 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.157 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
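The backslash-riddled right-hand sides throughout this trace (\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l here, \H\u\g\e\P\a\g\e\s\_\S\u\r\p and \1\0\2\4 further down) are not corruption: bash's xtrace re-prints a quoted operand of == inside [[ ]] with every character backslash-escaped, to show that it is compared literally rather than as a glob pattern. A short reproduction, independent of the SPDK scripts:

set -x
get=HugePages_Total
# xtrace renders the next test as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[[ MemTotal == "$get" ]] || echo "no match, keep scanning"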
00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:19.417 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12232240 kB' 'MemFree: 6706384 kB' 'MemUsed: 5525856 kB' 'SwapCached: 0 kB' 'Active: 437456 kB' 'Inactive: 2691140 kB' 'Active(anon): 117056 kB' 'Inactive(anon): 10656 kB' 'Active(file): 320400 kB' 'Inactive(file): 2680484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 296 kB' 'Writeback: 0 kB' 'FilePages: 3011352 kB' 'Mapped: 47904 kB' 'AnonPages: 117360 kB' 'Shmem: 10468 kB' 'KernelStack: 4608 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95152 kB' 'Slab: 173080 kB' 'SReclaimable: 95152 kB' 'SUnreclaim: 77928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.417 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
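The surrounding entries are setup/common.sh's get_meminfo at work: pick /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, mapfile the contents, strip the leading 'Node N ' prefix in the per-node case, then read each line with IFS=': ' and skip every key (the long runs of continue above and below) until the requested one appears, at which point its value is echoed back to the caller. A minimal standalone sketch of that lookup; the function name and fallback behaviour here are assumptions, not the SPDK helper itself:

#!/usr/bin/env bash
# Minimal get_meminfo-style lookup.  Names are illustrative; the traced
# helper lives in test/setup/common.sh and differs in detail.
shopt -s extglob   # needed for the +([0-9]) prefix strip below

meminfo_value() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    local -a mem
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")        # drop "Node N " on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the long runs of 'continue' in the trace
        echo "$val"
        return 0
    done
    return 1
}

meminfo_value HugePages_Total      # 1024 on this test VM
meminfo_value HugePages_Surp 0     # 0 for node0 in the run above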
00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:19.418 
13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:19.418 node0=1024 expecting 1024 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:19.418 13:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:19.418 00:11:19.419 real 0m0.983s 00:11:19.419 user 0m0.474s 00:11:19.419 sys 0m0.583s 00:11:19.419 13:29:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.419 13:29:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 ************************************ 00:11:19.419 END TEST no_shrink_alloc 00:11:19.419 ************************************ 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:19.419 13:29:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:19.419 00:11:19.419 real 0m5.267s 00:11:19.419 user 0m2.327s 00:11:19.419 sys 0m2.731s 00:11:19.419 13:29:32 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.419 13:29:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 ************************************ 00:11:19.419 END TEST hugepages 00:11:19.419 ************************************ 00:11:19.419 13:29:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:19.419 13:29:32 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:19.419 13:29:32 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.419 13:29:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:19.419 ************************************ 00:11:19.419 START TEST driver 00:11:19.419 ************************************ 00:11:19.419 13:29:32 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:19.419 * Looking for test storage... 
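The bookkeeping that closes the no_shrink_alloc test above (hugepages.sh@110 through @130 in the trace) amounts to: check that the global pool matches nr_hugepages plus surplus and reserved pages, then fold the surplus reported by get_meminfo into a per-node expectation and compare it against what the kernel exposes, which is where the 'node0=1024 expecting 1024' line comes from. A rough reconstruction under this VM's single-node, no-surplus conditions; the array names follow the trace, but the seeding shown here is an assumption:

nr_hugepages=1024 surp=0 resv=0
declare -a nodes_sys nodes_test

# One entry per NUMA node, seeded with the node's HugePages_Total.
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done

for node in "${!nodes_sys[@]}"; do
    nodes_test[$node]=$((nr_hugepages + resv + surp))
    echo "node$node=${nodes_test[$node]} expecting ${nodes_sys[$node]}"
    [[ ${nodes_test[$node]} == "${nodes_sys[$node]}" ]] || exit 1
done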
00:11:19.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:19.419 13:29:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:19.419 13:29:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:19.419 13:29:32 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:19.986 13:29:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:19.986 13:29:33 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:19.986 13:29:33 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.986 13:29:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 ************************************ 00:11:19.986 START TEST guess_driver 00:11:19.986 ************************************ 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:11:19.986 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:11:19.986 insmod /lib/modules/6.5.12-200.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:19.987 Looking for driver=uio_pci_generic 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
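The guess_driver trace above and below is a two-step preference: use vfio-pci when the host has populated IOMMU groups (or vfio's unsafe no-IOMMU parameter is set to Y), otherwise fall back to uio_pci_generic, accepted only if modprobe --show-depends can resolve it to a .ko on disk. On this VM there are no IOMMU groups, hence the 'Looking for driver=uio_pci_generic' message. A condensed sketch of that decision, not the driver.sh code itself:

shopt -s nullglob   # so an empty iommu_groups directory yields an empty array
pick_pci_driver() {
    local groups=(/sys/kernel/iommu_groups/*) unsafe=""
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

    if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
    else
        echo 'No valid driver found' >&2
        return 1
    fi
}

driver=$(pick_pci_driver) && echo "Looking for driver=$driver"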
00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:11:19.987 13:29:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:20.919 13:29:33 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:21.484 00:11:21.484 real 0m1.508s 00:11:21.484 user 0m0.530s 00:11:21.484 sys 0m0.993s 00:11:21.484 13:29:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.484 13:29:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:11:21.484 ************************************ 00:11:21.484 END TEST guess_driver 00:11:21.484 ************************************ 00:11:21.484 00:11:21.484 real 0m2.206s 00:11:21.484 user 0m0.732s 00:11:21.484 sys 0m1.549s 00:11:21.484 13:29:34 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.484 ************************************ 00:11:21.484 END TEST driver 00:11:21.484 13:29:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:21.484 ************************************ 00:11:21.741 13:29:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:21.741 13:29:34 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:21.741 13:29:34 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.741 13:29:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:21.742 ************************************ 00:11:21.742 START TEST devices 00:11:21.742 ************************************ 00:11:21.742 13:29:34 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:21.742 * Looking for test storage... 
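The devices suite that starts below opens with get_zoned_devs: walk every /sys/block/nvme* entry, read its queue/zoned attribute, and set aside anything whose zoned model is not 'none' so the mount tests skip it (on this VM all four namespaces report none). A standalone version of that filter, with an assumed array name:

declare -A zoned_devs=()
for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    if [[ $(< "$dev/queue/zoned") != none ]]; then
        zoned_devs[${dev##*/}]=1          # remember it so later steps skip it
        echo "skipping zoned namespace ${dev##*/}"
    fi
done
echo "zoned namespaces found: ${#zoned_devs[@]}"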
00:11:21.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:21.742 13:29:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:21.742 13:29:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:11:21.742 13:29:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:21.742 13:29:34 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:22.675 13:29:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:22.675 13:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:22.675 13:29:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:22.675 13:29:35 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:22.675 13:29:35 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:22.676 13:29:35 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:22.676 13:29:35 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:11:22.676 No valid GPT data, bailing 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:11:22.676 No valid GPT data, bailing 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:22.676 13:29:35 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:11:22.676 No valid GPT data, bailing 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:22.676 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:11:22.676 13:29:35 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:11:22.676 No valid GPT data, bailing 00:11:22.934 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:22.934 13:29:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:22.934 13:29:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:11:22.934 13:29:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:11:22.934 13:29:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:11:22.934 13:29:35 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:22.934 13:29:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:22.934 13:29:35 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:22.934 13:29:35 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:11:22.934 13:29:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 ************************************ 00:11:22.934 START TEST nvme_mount 00:11:22.934 ************************************ 00:11:22.934 13:29:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:11:22.934 13:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:22.934 13:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:22.934 13:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:22.935 13:29:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:23.866 Creating new GPT entries in memory. 00:11:23.866 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:23.866 other utilities. 00:11:23.866 13:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:23.866 13:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:23.866 13:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:23.866 13:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:23.866 13:29:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:24.839 Creating new GPT entries in memory. 00:11:24.839 The operation has completed successfully. 
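The nvme_mount setup traced above and below comes down to: wipe any existing GPT/MBR from the test disk, create one small partition (sectors 2048 through 264191 in this run), wait for the kernel to publish the new partition node, then format it ext4 and mount it where the dummy test_nvme file will live. A condensed sketch of that sequence; the device name and mount point are simply this run's values, and udevadm settle stands in for the sync_dev_uevents.sh wrapper the real script uses:

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                # destroy old GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191      # single test partition, as in the log
udevadm settle                          # let /dev/nvme0n1p1 appear
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                  # the file the verify step checks later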
00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 70134 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:24.839 13:29:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.096 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.353 13:29:38 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:25.353 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:25.353 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:25.919 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:25.919 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:25.919 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:25.919 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:25.919 13:29:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:26.177 13:29:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.435 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:26.692 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:26.949 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:26.949 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:26.949 13:29:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:26.950 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:26.950 00:11:26.950 real 0m3.991s 00:11:26.950 user 0m0.699s 00:11:26.950 sys 0m1.047s 00:11:26.950 13:29:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.950 13:29:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 ************************************ 00:11:26.950 END TEST nvme_mount 00:11:26.950 
************************************ 00:11:26.950 13:29:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:26.950 13:29:39 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:26.950 13:29:39 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.950 13:29:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 ************************************ 00:11:26.950 START TEST dm_mount 00:11:26.950 ************************************ 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:26.950 13:29:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:27.898 Creating new GPT entries in memory. 00:11:27.898 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:27.898 other utilities. 00:11:27.899 13:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:27.899 13:29:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:27.899 13:29:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:27.899 13:29:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:27.899 13:29:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:28.847 Creating new GPT entries in memory. 00:11:28.847 The operation has completed successfully. 
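The dm_mount test splits the same disk into two 262144-sector partitions and then stacks a device-mapper target named nvme_dm_test on top of them before formatting and mounting it (the dmsetup create call appears just below). The trace does not show the table handed to dmsetup, so the sketch assumes a plain linear concatenation of nvme0n1p1 and nvme0n1p2, which is one way to end up with a dm-0 node listed under both partitions' holders directories as the test expects:

# Sketch, not the repo's exact table: join the two partitions into one
# linear device-mapper target called nvme_dm_test.
P1=/dev/nvme0n1p1
P2=/dev/nvme0n1p2
SECTORS=262144                      # per-partition size from the sgdisk calls

dmsetup create nvme_dm_test <<EOF
0 $SECTORS linear $P1 0
$SECTORS $SECTORS linear $P2 0
EOF

# Resolve the friendly name to its dm-N node, as devices.sh@165-166 does.
dm=$(readlink -f /dev/mapper/nvme_dm_test)   # e.g. /dev/dm-0
dm=${dm##*/}

# Both backing partitions should now list that node as a holder.
ls /sys/class/block/nvme0n1p1/holders /sys/class/block/nvme0n1p2/holders
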
00:11:28.847 13:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:28.847 13:29:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:28.847 13:29:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:28.847 13:29:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:28.847 13:29:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:30.220 The operation has completed successfully. 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 70567 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:11:30.220 13:29:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.220 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:30.478 13:29:43 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:30.478 13:29:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.736 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.993 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.993 13:29:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:30.993 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.993 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:31.250 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:11:31.250 00:11:31.250 real 0m4.341s 00:11:31.250 user 0m0.498s 00:11:31.250 sys 0m0.822s 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.250 13:29:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:31.250 ************************************ 00:11:31.250 END TEST dm_mount 00:11:31.250 ************************************ 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:31.250 13:29:44 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:31.508 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:31.508 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:31.508 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:31.508 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:31.508 13:29:44 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:31.508 00:11:31.508 real 0m9.916s 00:11:31.508 user 0m1.836s 00:11:31.508 sys 0m2.522s 00:11:31.508 13:29:44 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.508 13:29:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 ************************************ 00:11:31.508 END TEST devices 00:11:31.508 ************************************ 00:11:31.508 00:11:31.508 real 0m23.281s 00:11:31.508 user 0m7.249s 00:11:31.508 sys 0m10.311s 00:11:31.508 13:29:44 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.508 13:29:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 ************************************ 00:11:31.508 END TEST setup.sh 00:11:31.508 ************************************ 00:11:31.766 13:29:44 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:32.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.331 Hugepages 00:11:32.331 node hugesize free / total 00:11:32.331 node0 1048576kB 0 / 0 00:11:32.331 node0 2048kB 2048 / 2048 00:11:32.331 00:11:32.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:32.331 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:32.589 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:32.589 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:11:32.589 13:29:45 -- spdk/autotest.sh@130 -- # uname -s 00:11:32.589 13:29:45 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:32.589 13:29:45 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:32.589 13:29:45 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:33.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.410 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.410 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:33.410 13:29:46 -- common/autotest_common.sh@1528 -- # sleep 1 00:11:34.368 13:29:47 -- common/autotest_common.sh@1529 -- # bdfs=() 00:11:34.368 13:29:47 -- common/autotest_common.sh@1529 -- # local bdfs 00:11:34.368 13:29:47 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:11:34.368 13:29:47 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:11:34.368 13:29:47 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:34.368 13:29:47 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:34.368 13:29:47 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:34.368 13:29:47 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:34.368 13:29:47 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:34.626 13:29:47 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:34.626 13:29:47 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:34.626 13:29:47 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:34.884 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:34.884 Waiting for block devices as requested 00:11:34.884 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.144 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.144 13:29:48 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:35.144 13:29:48 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:35.144 13:29:48 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:35.144 13:29:48 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
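The nvme_namespace_revert pass above resolves each PCI address to its kernel controller by scanning /sys/class/nvme, then reads the controller's OACS field with nvme-cli: 0x12a has bit 3 (0x8) set, so namespace management is advertised, and an unallocated capacity (unvmcap) of 0 lets the loop continue without reverting anything. A condensed sketch of that probe; the bdf value is just the example from the trace, and the bit-3 mask is inferred from the oacs_ns_manage=8 result shown here:

# Sketch of the controller capability probe; assumes nvme-cli is installed.
bdf=0000:00:10.0
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$path")                       # e.g. /dev/nvme1

oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
oacs_ns_manage=$(( oacs & 0x8 ))                     # bit 3: namespace management

unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
echo "ctrlr=$ctrlr oacs=$oacs ns_manage=$oacs_ns_manage unvmcap=$unvmcap"
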
00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:35.144 13:29:48 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1553 -- # continue 00:11:35.144 13:29:48 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:35.144 13:29:48 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:11:35.144 13:29:48 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:35.144 13:29:48 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:35.144 13:29:48 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:35.144 13:29:48 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:35.144 13:29:48 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:35.144 13:29:48 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:35.144 13:29:48 -- common/autotest_common.sh@1553 -- # continue 00:11:35.144 13:29:48 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:35.144 13:29:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:35.144 13:29:48 -- common/autotest_common.sh@10 -- # set +x 00:11:35.144 13:29:48 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:35.144 13:29:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:35.144 13:29:48 -- common/autotest_common.sh@10 -- # set +x 00:11:35.144 13:29:48 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:35.774 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:36.032 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:36.032 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:36.032 13:29:49 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:36.032 13:29:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.032 13:29:49 -- common/autotest_common.sh@10 -- # set +x 00:11:36.032 13:29:49 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:36.032 13:29:49 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:11:36.032 13:29:49 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:11:36.032 13:29:49 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:11:36.032 13:29:49 -- common/autotest_common.sh@1573 -- # local bdfs 00:11:36.290 13:29:49 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:11:36.290 13:29:49 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:36.290 13:29:49 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:36.290 13:29:49 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:36.290 13:29:49 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:36.290 13:29:49 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:36.290 13:29:49 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:36.290 13:29:49 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:36.290 13:29:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:36.290 13:29:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:36.290 13:29:49 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:36.290 13:29:49 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:36.290 13:29:49 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:36.290 13:29:49 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:36.290 13:29:49 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:36.290 13:29:49 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:36.290 13:29:49 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:11:36.290 13:29:49 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:11:36.290 13:29:49 -- common/autotest_common.sh@1589 -- # return 0 00:11:36.290 13:29:49 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:36.290 13:29:49 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:36.290 13:29:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:36.290 13:29:49 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:36.290 13:29:49 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:36.290 13:29:49 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:36.290 13:29:49 -- common/autotest_common.sh@10 -- # set +x 00:11:36.290 13:29:49 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:36.290 13:29:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:36.290 13:29:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.290 13:29:49 -- common/autotest_common.sh@10 -- # set +x 00:11:36.290 ************************************ 00:11:36.290 START TEST env 00:11:36.290 ************************************ 00:11:36.290 13:29:49 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:36.290 * Looking for test storage... 
00:11:36.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:36.290 13:29:49 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:36.290 13:29:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:36.290 13:29:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.290 13:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:11:36.290 ************************************ 00:11:36.290 START TEST env_memory 00:11:36.290 ************************************ 00:11:36.290 13:29:49 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:36.290 00:11:36.290 00:11:36.290 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.290 http://cunit.sourceforge.net/ 00:11:36.290 00:11:36.290 00:11:36.290 Suite: memory 00:11:36.290 Test: alloc and free memory map ...[2024-05-15 13:29:49.323470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:36.290 passed 00:11:36.290 Test: mem map translation ...[2024-05-15 13:29:49.349394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:36.290 [2024-05-15 13:29:49.349466] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:36.290 [2024-05-15 13:29:49.349513] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:36.290 [2024-05-15 13:29:49.349524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:36.548 passed 00:11:36.548 Test: mem map registration ...[2024-05-15 13:29:49.399649] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:36.548 [2024-05-15 13:29:49.399713] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:36.548 passed 00:11:36.548 Test: mem map adjacent registrations ...passed 00:11:36.548 00:11:36.548 Run Summary: Type Total Ran Passed Failed Inactive 00:11:36.548 suites 1 1 n/a 0 0 00:11:36.548 tests 4 4 4 0 0 00:11:36.548 asserts 152 152 152 0 n/a 00:11:36.548 00:11:36.548 Elapsed time = 0.162 seconds 00:11:36.548 00:11:36.548 real 0m0.180s 00:11:36.548 user 0m0.162s 00:11:36.548 sys 0m0.012s 00:11:36.548 13:29:49 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:36.548 13:29:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:36.548 ************************************ 00:11:36.548 END TEST env_memory 00:11:36.548 ************************************ 00:11:36.548 13:29:49 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:36.548 13:29:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:36.548 13:29:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.548 13:29:49 env -- common/autotest_common.sh@10 -- # set +x 00:11:36.548 ************************************ 00:11:36.548 START TEST env_vtophys 00:11:36.548 ************************************ 00:11:36.548 13:29:49 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:36.548 EAL: lib.eal log level changed from notice to debug 00:11:36.548 EAL: Detected lcore 0 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 1 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 2 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 3 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 4 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 5 as core 0 on socket 0 00:11:36.548 EAL: Detected lcore 6 as core 0 on socket 0 00:11:36.549 EAL: Detected lcore 7 as core 0 on socket 0 00:11:36.549 EAL: Detected lcore 8 as core 0 on socket 0 00:11:36.549 EAL: Detected lcore 9 as core 0 on socket 0 00:11:36.549 EAL: Maximum logical cores by configuration: 128 00:11:36.549 EAL: Detected CPU lcores: 10 00:11:36.549 EAL: Detected NUMA nodes: 1 00:11:36.549 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:11:36.549 EAL: Detected shared linkage of DPDK 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:11:36.549 EAL: Registered [vdev] bus. 00:11:36.549 EAL: bus.vdev log level changed from disabled to notice 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:11:36.549 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:11:36.549 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:11:36.549 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:11:36.549 EAL: No shared files mode enabled, IPC will be disabled 00:11:36.549 EAL: No shared files mode enabled, IPC is disabled 00:11:36.549 EAL: Selected IOVA mode 'PA' 00:11:36.549 EAL: Probing VFIO support... 00:11:36.549 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:36.549 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:36.549 EAL: Ask a virtual area of 0x2e000 bytes 00:11:36.549 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:36.549 EAL: Setting up physically contiguous memory... 
00:11:36.549 EAL: Setting maximum number of open files to 524288 00:11:36.549 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:36.549 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:36.549 EAL: Ask a virtual area of 0x61000 bytes 00:11:36.549 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:36.549 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:36.549 EAL: Ask a virtual area of 0x400000000 bytes 00:11:36.549 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:36.549 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:36.549 EAL: Ask a virtual area of 0x61000 bytes 00:11:36.549 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:36.549 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:36.549 EAL: Ask a virtual area of 0x400000000 bytes 00:11:36.549 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:36.549 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:36.549 EAL: Ask a virtual area of 0x61000 bytes 00:11:36.549 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:36.549 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:36.549 EAL: Ask a virtual area of 0x400000000 bytes 00:11:36.549 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:36.549 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:36.549 EAL: Ask a virtual area of 0x61000 bytes 00:11:36.549 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:36.549 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:36.549 EAL: Ask a virtual area of 0x400000000 bytes 00:11:36.549 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:36.549 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:36.549 EAL: Hugepages will be freed exactly as allocated. 00:11:36.549 EAL: No shared files mode enabled, IPC is disabled 00:11:36.549 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: TSC frequency is ~2100000 KHz 00:11:36.808 EAL: Main lcore 0 is ready (tid=7f89494d4a00;cpuset=[0]) 00:11:36.808 EAL: Trying to obtain current memory policy. 00:11:36.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.808 EAL: Restoring previous memory policy: 0 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was expanded by 2MB 00:11:36.808 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:36.808 EAL: Mem event callback 'spdk:(nil)' registered 00:11:36.808 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:36.808 00:11:36.808 00:11:36.808 CUnit - A unit testing framework for C - Version 2.1-3 00:11:36.808 http://cunit.sourceforge.net/ 00:11:36.808 00:11:36.808 00:11:36.808 Suite: components_suite 00:11:36.808 Test: vtophys_malloc_test ...passed 00:11:36.808 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
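Each of the four memseg lists above pairs a 0x61000-byte header area with a 0x400000000-byte data area: 0x400000000 bytes is 16 GiB, which at the 2 MiB hugepage size is exactly the 8192 segments per list, so EAL pre-reserves about 64 GiB of virtual address space here without touching physical memory. The arithmetic, for reference:

# Per-list virtual area reservation as reported by EAL above.
bytes=$(( 0x400000000 ))
echo "$(( bytes / 1024**3 )) GiB per list"              # 16
echo "$(( bytes / (2 * 1024**2) )) 2MiB segments"       # 8192, matching n_segs:8192
echo "$(( 4 * bytes / 1024**3 )) GiB reserved in total" # 64
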
00:11:36.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.808 EAL: Restoring previous memory policy: 4 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was expanded by 4MB 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was shrunk by 4MB 00:11:36.808 EAL: Trying to obtain current memory policy. 00:11:36.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.808 EAL: Restoring previous memory policy: 4 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was expanded by 6MB 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was shrunk by 6MB 00:11:36.808 EAL: Trying to obtain current memory policy. 00:11:36.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.808 EAL: Restoring previous memory policy: 4 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was expanded by 10MB 00:11:36.808 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.808 EAL: request: mp_malloc_sync 00:11:36.808 EAL: No shared files mode enabled, IPC is disabled 00:11:36.808 EAL: Heap on socket 0 was shrunk by 10MB 00:11:36.808 EAL: Trying to obtain current memory policy. 00:11:36.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.808 EAL: Restoring previous memory policy: 4 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was expanded by 18MB 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was shrunk by 18MB 00:11:36.809 EAL: Trying to obtain current memory policy. 00:11:36.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.809 EAL: Restoring previous memory policy: 4 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was expanded by 34MB 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was shrunk by 34MB 00:11:36.809 EAL: Trying to obtain current memory policy. 
00:11:36.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.809 EAL: Restoring previous memory policy: 4 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was expanded by 66MB 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was shrunk by 66MB 00:11:36.809 EAL: Trying to obtain current memory policy. 00:11:36.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.809 EAL: Restoring previous memory policy: 4 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was expanded by 130MB 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was shrunk by 130MB 00:11:36.809 EAL: Trying to obtain current memory policy. 00:11:36.809 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:36.809 EAL: Restoring previous memory policy: 4 00:11:36.809 EAL: Calling mem event callback 'spdk:(nil)' 00:11:36.809 EAL: request: mp_malloc_sync 00:11:36.809 EAL: No shared files mode enabled, IPC is disabled 00:11:36.809 EAL: Heap on socket 0 was expanded by 258MB 00:11:37.068 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.068 EAL: request: mp_malloc_sync 00:11:37.068 EAL: No shared files mode enabled, IPC is disabled 00:11:37.068 EAL: Heap on socket 0 was shrunk by 258MB 00:11:37.068 EAL: Trying to obtain current memory policy. 00:11:37.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:37.068 EAL: Restoring previous memory policy: 4 00:11:37.068 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.068 EAL: request: mp_malloc_sync 00:11:37.068 EAL: No shared files mode enabled, IPC is disabled 00:11:37.068 EAL: Heap on socket 0 was expanded by 514MB 00:11:37.068 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.327 EAL: request: mp_malloc_sync 00:11:37.327 EAL: No shared files mode enabled, IPC is disabled 00:11:37.327 EAL: Heap on socket 0 was shrunk by 514MB 00:11:37.327 EAL: Trying to obtain current memory policy. 
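The expand/shrink rounds in this suite step through 4, 6, 10, 18, 34, 66, 130, 258, 514 and finally 1026 MB, i.e. 2^k + 2 MB for k = 1..10, so each round roughly doubles the heap that the hugepage-backed allocator has to grow and then release. A one-liner that reproduces the sequence of sizes seen in the log:

# Allocation sizes used by vtophys_spdk_malloc_test, in MB.
for k in $(seq 1 10); do printf '%d ' $(( 2**k + 2 )); done; echo
# -> 4 6 10 18 34 66 130 258 514 1026
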
00:11:37.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:37.586 EAL: Restoring previous memory policy: 4 00:11:37.586 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.586 EAL: request: mp_malloc_sync 00:11:37.586 EAL: No shared files mode enabled, IPC is disabled 00:11:37.586 EAL: Heap on socket 0 was expanded by 1026MB 00:11:37.586 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.845 EAL: request: mp_malloc_sync 00:11:37.845 EAL: No shared files mode enabled, IPC is disabled 00:11:37.845 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:37.845 passed 00:11:37.845 00:11:37.845 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.845 suites 1 1 n/a 0 0 00:11:37.845 tests 2 2 2 0 0 00:11:37.845 asserts 6466 6466 6466 0 n/a 00:11:37.845 00:11:37.845 Elapsed time = 1.056 seconds 00:11:37.845 EAL: Calling mem event callback 'spdk:(nil)' 00:11:37.845 EAL: request: mp_malloc_sync 00:11:37.845 EAL: No shared files mode enabled, IPC is disabled 00:11:37.845 EAL: Heap on socket 0 was shrunk by 2MB 00:11:37.845 EAL: No shared files mode enabled, IPC is disabled 00:11:37.845 EAL: No shared files mode enabled, IPC is disabled 00:11:37.845 EAL: No shared files mode enabled, IPC is disabled 00:11:37.845 00:11:37.845 real 0m1.266s 00:11:37.845 user 0m0.658s 00:11:37.845 sys 0m0.470s 00:11:37.845 13:29:50 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:37.845 13:29:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 ************************************ 00:11:37.845 END TEST env_vtophys 00:11:37.845 ************************************ 00:11:37.845 13:29:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:37.845 13:29:50 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:37.845 13:29:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.845 13:29:50 env -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 ************************************ 00:11:37.845 START TEST env_pci 00:11:37.845 ************************************ 00:11:37.845 13:29:50 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:37.845 00:11:37.845 00:11:37.845 CUnit - A unit testing framework for C - Version 2.1-3 00:11:37.845 http://cunit.sourceforge.net/ 00:11:37.845 00:11:37.845 00:11:37.845 Suite: pci 00:11:37.845 Test: pci_hook ...[2024-05-15 13:29:50.831071] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71762 has claimed it 00:11:37.845 passed 00:11:37.845 00:11:37.845 Run Summary: Type Total Ran Passed Failed Inactive 00:11:37.845 suites 1 1 n/a 0 0 00:11:37.845 EAL: Cannot find device (10000:00:01.0) 00:11:37.845 EAL: Failed to attach device on primary process 00:11:37.845 tests 1 1 1 0 0 00:11:37.845 asserts 25 25 25 0 n/a 00:11:37.845 00:11:37.845 Elapsed time = 0.004 seconds 00:11:37.845 00:11:37.845 real 0m0.023s 00:11:37.845 user 0m0.010s 00:11:37.845 sys 0m0.013s 00:11:37.845 13:29:50 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:37.845 13:29:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 ************************************ 00:11:37.845 END TEST env_pci 00:11:37.845 ************************************ 00:11:37.845 13:29:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:37.845 13:29:50 env -- env/env.sh@15 -- # uname 00:11:37.845 13:29:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:37.845 13:29:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:37.845 13:29:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:37.845 13:29:50 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:37.845 13:29:50 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:37.845 13:29:50 env -- common/autotest_common.sh@10 -- # set +x 00:11:37.845 ************************************ 00:11:37.845 START TEST env_dpdk_post_init 00:11:37.845 ************************************ 00:11:37.845 13:29:50 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:37.845 EAL: Detected CPU lcores: 10 00:11:37.845 EAL: Detected NUMA nodes: 1 00:11:37.845 EAL: Detected shared linkage of DPDK 00:11:38.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:38.103 EAL: Selected IOVA mode 'PA' 00:11:38.103 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:38.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:38.103 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:38.103 Starting DPDK initialization... 00:11:38.103 Starting SPDK post initialization... 00:11:38.103 SPDK NVMe probe 00:11:38.103 Attaching to 0000:00:10.0 00:11:38.103 Attaching to 0000:00:11.0 00:11:38.103 Attached to 0000:00:10.0 00:11:38.103 Attached to 0000:00:11.0 00:11:38.103 Cleaning up... 00:11:38.103 00:11:38.103 real 0m0.202s 00:11:38.103 user 0m0.052s 00:11:38.103 sys 0m0.048s 00:11:38.103 13:29:51 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.103 13:29:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:38.103 ************************************ 00:11:38.103 END TEST env_dpdk_post_init 00:11:38.103 ************************************ 00:11:38.103 13:29:51 env -- env/env.sh@26 -- # uname 00:11:38.103 13:29:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:38.103 13:29:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:38.103 13:29:51 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:38.103 13:29:51 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.103 13:29:51 env -- common/autotest_common.sh@10 -- # set +x 00:11:38.103 ************************************ 00:11:38.103 START TEST env_mem_callbacks 00:11:38.103 ************************************ 00:11:38.103 13:29:51 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:38.103 EAL: Detected CPU lcores: 10 00:11:38.103 EAL: Detected NUMA nodes: 1 00:11:38.103 EAL: Detected shared linkage of DPDK 00:11:38.103 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:38.103 EAL: Selected IOVA mode 'PA' 00:11:38.360 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:38.360 00:11:38.360 00:11:38.360 CUnit - A unit testing framework for C - Version 2.1-3 00:11:38.360 http://cunit.sourceforge.net/ 00:11:38.360 00:11:38.360 00:11:38.360 Suite: memory 00:11:38.360 Test: test ... 
00:11:38.360 register 0x200000200000 2097152 00:11:38.360 malloc 3145728 00:11:38.360 register 0x200000400000 4194304 00:11:38.361 buf 0x200000500000 len 3145728 PASSED 00:11:38.361 malloc 64 00:11:38.361 buf 0x2000004fff40 len 64 PASSED 00:11:38.361 malloc 4194304 00:11:38.361 register 0x200000800000 6291456 00:11:38.361 buf 0x200000a00000 len 4194304 PASSED 00:11:38.361 free 0x200000500000 3145728 00:11:38.361 free 0x2000004fff40 64 00:11:38.361 unregister 0x200000400000 4194304 PASSED 00:11:38.361 free 0x200000a00000 4194304 00:11:38.361 unregister 0x200000800000 6291456 PASSED 00:11:38.361 malloc 8388608 00:11:38.361 register 0x200000400000 10485760 00:11:38.361 buf 0x200000600000 len 8388608 PASSED 00:11:38.361 free 0x200000600000 8388608 00:11:38.361 unregister 0x200000400000 10485760 PASSED 00:11:38.361 passed 00:11:38.361 00:11:38.361 Run Summary: Type Total Ran Passed Failed Inactive 00:11:38.361 suites 1 1 n/a 0 0 00:11:38.361 tests 1 1 1 0 0 00:11:38.361 asserts 15 15 15 0 n/a 00:11:38.361 00:11:38.361 Elapsed time = 0.010 seconds 00:11:38.361 00:11:38.361 real 0m0.148s 00:11:38.361 user 0m0.018s 00:11:38.361 sys 0m0.028s 00:11:38.361 13:29:51 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.361 13:29:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:38.361 ************************************ 00:11:38.361 END TEST env_mem_callbacks 00:11:38.361 ************************************ 00:11:38.361 00:11:38.361 real 0m2.120s 00:11:38.361 user 0m1.002s 00:11:38.361 sys 0m0.777s 00:11:38.361 13:29:51 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.361 13:29:51 env -- common/autotest_common.sh@10 -- # set +x 00:11:38.361 ************************************ 00:11:38.361 END TEST env 00:11:38.361 ************************************ 00:11:38.361 13:29:51 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:38.361 13:29:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:38.361 13:29:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.361 13:29:51 -- common/autotest_common.sh@10 -- # set +x 00:11:38.361 ************************************ 00:11:38.361 START TEST rpc 00:11:38.361 ************************************ 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:38.361 * Looking for test storage... 00:11:38.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:38.361 13:29:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71870 00:11:38.361 13:29:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:38.361 13:29:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:38.361 13:29:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71870 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@827 -- # '[' -z 71870 ']' 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:38.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
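The rpc.sh trace just above starts the SPDK target (spdk_tgt -e bdev) and then calls waitforlisten, which waits for the target to start listening on /var/tmp/spdk.sock. A minimal manual sketch of that handshake, assuming the same spdk_repo checkout and the default RPC socket; the polling loop is only an illustrative stand-in for the waitforlisten helper, not its actual implementation:

    # start the target with the bdev tracepoint group enabled, as rpc.sh does
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    # poll the RPC socket with a trivial call until the target answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1
    done

scripts/rpc.py and the spdk_get_version method both appear elsewhere in this log; everything else in the sketch is an assumption about how one would reproduce the wait by hand.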
00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:38.361 13:29:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.619 [2024-05-15 13:29:51.530130] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:38.619 [2024-05-15 13:29:51.530318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71870 ] 00:11:38.619 [2024-05-15 13:29:51.667969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:38.619 [2024-05-15 13:29:51.679506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.877 [2024-05-15 13:29:51.734984] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:38.877 [2024-05-15 13:29:51.735055] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71870' to capture a snapshot of events at runtime. 00:11:38.877 [2024-05-15 13:29:51.735067] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.877 [2024-05-15 13:29:51.735077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.877 [2024-05-15 13:29:51.735086] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71870 for offline analysis/debug. 00:11:38.877 [2024-05-15 13:29:51.735131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.446 13:29:52 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:39.446 13:29:52 rpc -- common/autotest_common.sh@860 -- # return 0 00:11:39.446 13:29:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:39.446 13:29:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:39.446 13:29:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:39.446 13:29:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:39.446 13:29:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:39.446 13:29:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:39.446 13:29:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.446 ************************************ 00:11:39.446 START TEST rpc_integrity 00:11:39.446 ************************************ 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:39.704 13:29:52 rpc.rpc_integrity 
-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:39.704 { 00:11:39.704 "name": "Malloc0", 00:11:39.704 "aliases": [ 00:11:39.704 "4e9903d7-6f9b-4892-9213-0305c4d716ed" 00:11:39.704 ], 00:11:39.704 "product_name": "Malloc disk", 00:11:39.704 "block_size": 512, 00:11:39.704 "num_blocks": 16384, 00:11:39.704 "uuid": "4e9903d7-6f9b-4892-9213-0305c4d716ed", 00:11:39.704 "assigned_rate_limits": { 00:11:39.704 "rw_ios_per_sec": 0, 00:11:39.704 "rw_mbytes_per_sec": 0, 00:11:39.704 "r_mbytes_per_sec": 0, 00:11:39.704 "w_mbytes_per_sec": 0 00:11:39.704 }, 00:11:39.704 "claimed": false, 00:11:39.704 "zoned": false, 00:11:39.704 "supported_io_types": { 00:11:39.704 "read": true, 00:11:39.704 "write": true, 00:11:39.704 "unmap": true, 00:11:39.704 "write_zeroes": true, 00:11:39.704 "flush": true, 00:11:39.704 "reset": true, 00:11:39.704 "compare": false, 00:11:39.704 "compare_and_write": false, 00:11:39.704 "abort": true, 00:11:39.704 "nvme_admin": false, 00:11:39.704 "nvme_io": false 00:11:39.704 }, 00:11:39.704 "memory_domains": [ 00:11:39.704 { 00:11:39.704 "dma_device_id": "system", 00:11:39.704 "dma_device_type": 1 00:11:39.704 }, 00:11:39.704 { 00:11:39.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.704 "dma_device_type": 2 00:11:39.704 } 00:11:39.704 ], 00:11:39.704 "driver_specific": {} 00:11:39.704 } 00:11:39.704 ]' 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.704 [2024-05-15 13:29:52.684011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:39.704 [2024-05-15 13:29:52.684260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.704 [2024-05-15 13:29:52.684376] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x964bf0 00:11:39.704 [2024-05-15 13:29:52.684457] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.704 [2024-05-15 13:29:52.686145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.704 [2024-05-15 13:29:52.686358] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:39.704 Passthru0 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:39.704 13:29:52 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.704 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.704 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:39.704 { 00:11:39.704 "name": "Malloc0", 00:11:39.704 "aliases": [ 00:11:39.704 "4e9903d7-6f9b-4892-9213-0305c4d716ed" 00:11:39.704 ], 00:11:39.704 "product_name": "Malloc disk", 00:11:39.704 "block_size": 512, 00:11:39.704 "num_blocks": 16384, 00:11:39.704 "uuid": "4e9903d7-6f9b-4892-9213-0305c4d716ed", 00:11:39.704 "assigned_rate_limits": { 00:11:39.704 "rw_ios_per_sec": 0, 00:11:39.704 "rw_mbytes_per_sec": 0, 00:11:39.704 "r_mbytes_per_sec": 0, 00:11:39.704 "w_mbytes_per_sec": 0 00:11:39.704 }, 00:11:39.704 "claimed": true, 00:11:39.704 "claim_type": "exclusive_write", 00:11:39.704 "zoned": false, 00:11:39.704 "supported_io_types": { 00:11:39.704 "read": true, 00:11:39.704 "write": true, 00:11:39.704 "unmap": true, 00:11:39.704 "write_zeroes": true, 00:11:39.704 "flush": true, 00:11:39.704 "reset": true, 00:11:39.704 "compare": false, 00:11:39.704 "compare_and_write": false, 00:11:39.704 "abort": true, 00:11:39.704 "nvme_admin": false, 00:11:39.704 "nvme_io": false 00:11:39.704 }, 00:11:39.704 "memory_domains": [ 00:11:39.704 { 00:11:39.704 "dma_device_id": "system", 00:11:39.704 "dma_device_type": 1 00:11:39.704 }, 00:11:39.704 { 00:11:39.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.704 "dma_device_type": 2 00:11:39.704 } 00:11:39.704 ], 00:11:39.704 "driver_specific": {} 00:11:39.704 }, 00:11:39.704 { 00:11:39.704 "name": "Passthru0", 00:11:39.704 "aliases": [ 00:11:39.704 "e6459b23-df75-5646-9320-94a272c38357" 00:11:39.704 ], 00:11:39.704 "product_name": "passthru", 00:11:39.704 "block_size": 512, 00:11:39.704 "num_blocks": 16384, 00:11:39.704 "uuid": "e6459b23-df75-5646-9320-94a272c38357", 00:11:39.704 "assigned_rate_limits": { 00:11:39.704 "rw_ios_per_sec": 0, 00:11:39.704 "rw_mbytes_per_sec": 0, 00:11:39.704 "r_mbytes_per_sec": 0, 00:11:39.704 "w_mbytes_per_sec": 0 00:11:39.704 }, 00:11:39.704 "claimed": false, 00:11:39.704 "zoned": false, 00:11:39.704 "supported_io_types": { 00:11:39.704 "read": true, 00:11:39.704 "write": true, 00:11:39.704 "unmap": true, 00:11:39.704 "write_zeroes": true, 00:11:39.704 "flush": true, 00:11:39.704 "reset": true, 00:11:39.704 "compare": false, 00:11:39.704 "compare_and_write": false, 00:11:39.704 "abort": true, 00:11:39.704 "nvme_admin": false, 00:11:39.704 "nvme_io": false 00:11:39.704 }, 00:11:39.704 "memory_domains": [ 00:11:39.704 { 00:11:39.705 "dma_device_id": "system", 00:11:39.705 "dma_device_type": 1 00:11:39.705 }, 00:11:39.705 { 00:11:39.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.705 "dma_device_type": 2 00:11:39.705 } 00:11:39.705 ], 00:11:39.705 "driver_specific": { 00:11:39.705 "passthru": { 00:11:39.705 "name": "Passthru0", 00:11:39.705 "base_bdev_name": "Malloc0" 00:11:39.705 } 00:11:39.705 } 00:11:39.705 } 00:11:39.705 ]' 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.705 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:39.705 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:39.963 13:29:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:39.963 00:11:39.963 real 0m0.293s 00:11:39.963 user 0m0.181s 00:11:39.963 sys 0m0.041s 00:11:39.963 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.963 13:29:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 ************************************ 00:11:39.963 END TEST rpc_integrity 00:11:39.963 ************************************ 00:11:39.963 13:29:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:39.963 13:29:52 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:39.963 13:29:52 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:39.963 13:29:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 ************************************ 00:11:39.963 START TEST rpc_plugins 00:11:39.963 ************************************ 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:39.963 { 00:11:39.963 "name": "Malloc1", 00:11:39.963 "aliases": [ 00:11:39.963 "e873523c-10ac-4324-a418-2c3d12f04ea1" 00:11:39.963 ], 00:11:39.963 "product_name": "Malloc disk", 00:11:39.963 "block_size": 4096, 00:11:39.963 "num_blocks": 256, 00:11:39.963 "uuid": "e873523c-10ac-4324-a418-2c3d12f04ea1", 00:11:39.963 "assigned_rate_limits": { 00:11:39.963 "rw_ios_per_sec": 0, 00:11:39.963 "rw_mbytes_per_sec": 0, 00:11:39.963 "r_mbytes_per_sec": 0, 00:11:39.963 "w_mbytes_per_sec": 0 00:11:39.963 }, 00:11:39.963 "claimed": false, 00:11:39.963 "zoned": false, 00:11:39.963 "supported_io_types": { 00:11:39.963 "read": true, 00:11:39.963 "write": true, 00:11:39.963 "unmap": true, 00:11:39.963 "write_zeroes": true, 00:11:39.963 "flush": true, 00:11:39.963 "reset": true, 00:11:39.963 
"compare": false, 00:11:39.963 "compare_and_write": false, 00:11:39.963 "abort": true, 00:11:39.963 "nvme_admin": false, 00:11:39.963 "nvme_io": false 00:11:39.963 }, 00:11:39.963 "memory_domains": [ 00:11:39.963 { 00:11:39.963 "dma_device_id": "system", 00:11:39.963 "dma_device_type": 1 00:11:39.963 }, 00:11:39.963 { 00:11:39.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.963 "dma_device_type": 2 00:11:39.963 } 00:11:39.963 ], 00:11:39.963 "driver_specific": {} 00:11:39.963 } 00:11:39.963 ]' 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:39.963 13:29:52 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:39.963 13:29:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:39.963 13:29:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:39.963 00:11:39.963 real 0m0.163s 00:11:39.963 user 0m0.101s 00:11:39.963 sys 0m0.019s 00:11:39.964 13:29:53 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.964 13:29:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:39.964 ************************************ 00:11:39.964 END TEST rpc_plugins 00:11:39.964 ************************************ 00:11:40.222 13:29:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:40.222 13:29:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:40.222 13:29:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:40.222 13:29:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.222 ************************************ 00:11:40.222 START TEST rpc_trace_cmd_test 00:11:40.222 ************************************ 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:40.222 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71870", 00:11:40.222 "tpoint_group_mask": "0x8", 00:11:40.222 "iscsi_conn": { 00:11:40.222 "mask": "0x2", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "scsi": { 00:11:40.222 "mask": "0x4", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "bdev": { 00:11:40.222 "mask": "0x8", 00:11:40.222 "tpoint_mask": "0xffffffffffffffff" 00:11:40.222 }, 00:11:40.222 "nvmf_rdma": { 
00:11:40.222 "mask": "0x10", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "nvmf_tcp": { 00:11:40.222 "mask": "0x20", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "ftl": { 00:11:40.222 "mask": "0x40", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "blobfs": { 00:11:40.222 "mask": "0x80", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "dsa": { 00:11:40.222 "mask": "0x200", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "thread": { 00:11:40.222 "mask": "0x400", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "nvme_pcie": { 00:11:40.222 "mask": "0x800", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "iaa": { 00:11:40.222 "mask": "0x1000", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "nvme_tcp": { 00:11:40.222 "mask": "0x2000", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "bdev_nvme": { 00:11:40.222 "mask": "0x4000", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 }, 00:11:40.222 "sock": { 00:11:40.222 "mask": "0x8000", 00:11:40.222 "tpoint_mask": "0x0" 00:11:40.222 } 00:11:40.222 }' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:40.222 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:40.485 13:29:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:40.485 00:11:40.485 real 0m0.258s 00:11:40.485 user 0m0.210s 00:11:40.485 sys 0m0.033s 00:11:40.485 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 ************************************ 00:11:40.485 END TEST rpc_trace_cmd_test 00:11:40.485 ************************************ 00:11:40.485 13:29:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:11:40.485 13:29:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:40.485 13:29:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:40.485 13:29:53 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:40.485 13:29:53 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:40.485 13:29:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 ************************************ 00:11:40.485 START TEST rpc_daemon_integrity 00:11:40.485 ************************************ 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.485 13:29:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:40.485 { 00:11:40.485 "name": "Malloc2", 00:11:40.485 "aliases": [ 00:11:40.485 "e7705bee-493c-47c1-b15d-f9dc56d4bd80" 00:11:40.485 ], 00:11:40.485 "product_name": "Malloc disk", 00:11:40.485 "block_size": 512, 00:11:40.485 "num_blocks": 16384, 00:11:40.485 "uuid": "e7705bee-493c-47c1-b15d-f9dc56d4bd80", 00:11:40.485 "assigned_rate_limits": { 00:11:40.485 "rw_ios_per_sec": 0, 00:11:40.485 "rw_mbytes_per_sec": 0, 00:11:40.485 "r_mbytes_per_sec": 0, 00:11:40.485 "w_mbytes_per_sec": 0 00:11:40.485 }, 00:11:40.485 "claimed": false, 00:11:40.485 "zoned": false, 00:11:40.485 "supported_io_types": { 00:11:40.485 "read": true, 00:11:40.485 "write": true, 00:11:40.485 "unmap": true, 00:11:40.485 "write_zeroes": true, 00:11:40.485 "flush": true, 00:11:40.485 "reset": true, 00:11:40.485 "compare": false, 00:11:40.485 "compare_and_write": false, 00:11:40.485 "abort": true, 00:11:40.485 "nvme_admin": false, 00:11:40.485 "nvme_io": false 00:11:40.485 }, 00:11:40.485 "memory_domains": [ 00:11:40.485 { 00:11:40.485 "dma_device_id": "system", 00:11:40.485 "dma_device_type": 1 00:11:40.485 }, 00:11:40.485 { 00:11:40.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.485 "dma_device_type": 2 00:11:40.485 } 00:11:40.485 ], 00:11:40.485 "driver_specific": {} 00:11:40.485 } 00:11:40.485 ]' 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 [2024-05-15 13:29:53.544352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:11:40.485 [2024-05-15 13:29:53.544579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.485 [2024-05-15 13:29:53.544633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x970960 00:11:40.485 [2024-05-15 13:29:53.544789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.485 [2024-05-15 13:29:53.546402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.485 [2024-05-15 
13:29:53.546554] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:40.485 Passthru0 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:40.485 { 00:11:40.485 "name": "Malloc2", 00:11:40.485 "aliases": [ 00:11:40.485 "e7705bee-493c-47c1-b15d-f9dc56d4bd80" 00:11:40.485 ], 00:11:40.485 "product_name": "Malloc disk", 00:11:40.485 "block_size": 512, 00:11:40.485 "num_blocks": 16384, 00:11:40.485 "uuid": "e7705bee-493c-47c1-b15d-f9dc56d4bd80", 00:11:40.485 "assigned_rate_limits": { 00:11:40.485 "rw_ios_per_sec": 0, 00:11:40.485 "rw_mbytes_per_sec": 0, 00:11:40.485 "r_mbytes_per_sec": 0, 00:11:40.485 "w_mbytes_per_sec": 0 00:11:40.485 }, 00:11:40.485 "claimed": true, 00:11:40.485 "claim_type": "exclusive_write", 00:11:40.485 "zoned": false, 00:11:40.485 "supported_io_types": { 00:11:40.485 "read": true, 00:11:40.485 "write": true, 00:11:40.485 "unmap": true, 00:11:40.485 "write_zeroes": true, 00:11:40.485 "flush": true, 00:11:40.485 "reset": true, 00:11:40.485 "compare": false, 00:11:40.485 "compare_and_write": false, 00:11:40.485 "abort": true, 00:11:40.485 "nvme_admin": false, 00:11:40.485 "nvme_io": false 00:11:40.485 }, 00:11:40.485 "memory_domains": [ 00:11:40.485 { 00:11:40.485 "dma_device_id": "system", 00:11:40.485 "dma_device_type": 1 00:11:40.485 }, 00:11:40.485 { 00:11:40.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.485 "dma_device_type": 2 00:11:40.485 } 00:11:40.485 ], 00:11:40.485 "driver_specific": {} 00:11:40.485 }, 00:11:40.485 { 00:11:40.485 "name": "Passthru0", 00:11:40.485 "aliases": [ 00:11:40.485 "c2ebb79d-fb4d-5bc1-8755-24608593c83c" 00:11:40.485 ], 00:11:40.485 "product_name": "passthru", 00:11:40.485 "block_size": 512, 00:11:40.485 "num_blocks": 16384, 00:11:40.485 "uuid": "c2ebb79d-fb4d-5bc1-8755-24608593c83c", 00:11:40.485 "assigned_rate_limits": { 00:11:40.485 "rw_ios_per_sec": 0, 00:11:40.485 "rw_mbytes_per_sec": 0, 00:11:40.485 "r_mbytes_per_sec": 0, 00:11:40.485 "w_mbytes_per_sec": 0 00:11:40.485 }, 00:11:40.485 "claimed": false, 00:11:40.485 "zoned": false, 00:11:40.485 "supported_io_types": { 00:11:40.485 "read": true, 00:11:40.485 "write": true, 00:11:40.485 "unmap": true, 00:11:40.485 "write_zeroes": true, 00:11:40.485 "flush": true, 00:11:40.485 "reset": true, 00:11:40.485 "compare": false, 00:11:40.485 "compare_and_write": false, 00:11:40.485 "abort": true, 00:11:40.485 "nvme_admin": false, 00:11:40.485 "nvme_io": false 00:11:40.485 }, 00:11:40.485 "memory_domains": [ 00:11:40.485 { 00:11:40.485 "dma_device_id": "system", 00:11:40.485 "dma_device_type": 1 00:11:40.485 }, 00:11:40.485 { 00:11:40.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.485 "dma_device_type": 2 00:11:40.485 } 00:11:40.485 ], 00:11:40.485 "driver_specific": { 00:11:40.485 "passthru": { 00:11:40.485 "name": "Passthru0", 00:11:40.485 "base_bdev_name": "Malloc2" 00:11:40.485 } 00:11:40.485 } 00:11:40.485 } 00:11:40.485 ]' 00:11:40.485 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:40.744 13:29:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:40.744 00:11:40.744 real 0m0.284s 00:11:40.744 user 0m0.177s 00:11:40.744 sys 0m0.038s 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.744 13:29:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:40.744 ************************************ 00:11:40.744 END TEST rpc_daemon_integrity 00:11:40.744 ************************************ 00:11:40.744 13:29:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:40.744 13:29:53 rpc -- rpc/rpc.sh@84 -- # killprocess 71870 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@946 -- # '[' -z 71870 ']' 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@950 -- # kill -0 71870 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@951 -- # uname 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71870 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71870' 00:11:40.744 killing process with pid 71870 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@965 -- # kill 71870 00:11:40.744 13:29:53 rpc -- common/autotest_common.sh@970 -- # wait 71870 00:11:41.020 00:11:41.020 real 0m2.728s 00:11:41.020 user 0m3.507s 00:11:41.020 sys 0m0.659s 00:11:41.020 13:29:54 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.020 13:29:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.020 ************************************ 00:11:41.020 END TEST rpc 00:11:41.020 ************************************ 00:11:41.279 13:29:54 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:41.279 13:29:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.279 13:29:54 -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:11:41.279 13:29:54 -- common/autotest_common.sh@10 -- # set +x 00:11:41.279 ************************************ 00:11:41.279 START TEST skip_rpc 00:11:41.279 ************************************ 00:11:41.279 13:29:54 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:41.279 * Looking for test storage... 00:11:41.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:41.279 13:29:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:41.279 13:29:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:41.279 13:29:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:41.279 13:29:54 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:41.279 13:29:54 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.279 13:29:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.279 ************************************ 00:11:41.279 START TEST skip_rpc 00:11:41.279 ************************************ 00:11:41.279 13:29:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:11:41.279 13:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72064 00:11:41.279 13:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:41.279 13:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:41.279 13:29:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:41.279 [2024-05-15 13:29:54.297473] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:41.279 [2024-05-15 13:29:54.297928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72064 ] 00:11:41.540 [2024-05-15 13:29:54.428414] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
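The skip_rpc case above launches spdk_tgt with --no-rpc-server, so the assertions that follow expect rpc_cmd spdk_get_version to fail. A minimal sketch of that expectation, reusing the binary paths shown in the log; the error-handling shape here is illustrative and not the harness's actual NOT helper:

    # with --no-rpc-server there is no RPC socket, so any RPC call must fail
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: RPC succeeded without an RPC server" >&2
    fi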
00:11:41.540 [2024-05-15 13:29:54.442356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.540 [2024-05-15 13:29:54.521668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 72064 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 72064 ']' 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 72064 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72064 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72064' 00:11:46.845 killing process with pid 72064 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 72064 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 72064 00:11:46.845 00:11:46.845 real 0m5.380s 00:11:46.845 user 0m4.971s 00:11:46.845 sys 0m0.294s 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.845 13:29:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 ************************************ 00:11:46.845 END TEST skip_rpc 00:11:46.845 ************************************ 00:11:46.845 13:29:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:46.845 13:29:59 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:46.845 13:29:59 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.845 13:29:59 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 ************************************ 00:11:46.845 START TEST skip_rpc_with_json 00:11:46.845 ************************************ 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72145 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72145 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 72145 ']' 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:46.845 13:29:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:46.845 [2024-05-15 13:29:59.718345] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:46.845 [2024-05-15 13:29:59.718687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:11:46.845 [2024-05-15 13:29:59.843450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
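In the trace that follows, the skip_rpc_with_json case drives three JSON-RPCs against the freshly started target: nvmf_get_transports (which returns the "No such device" error below because no transport exists yet), nvmf_create_transport, and save_config, whose JSON snapshot makes up the bulk of the next lines. A minimal shell sketch of the same sequence with the stock rpc.py client against the default socket; /tmp/config.json is only an illustrative destination, since the test itself writes to test/rpc/config.json and later restarts the target with --json against it:

    # before any transport exists this fails with "No such device", as seen below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_transports --trtype tcp
    # create the TCP transport, then dump the running configuration as JSON
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/config.json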
00:11:46.845 [2024-05-15 13:29:59.860100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.845 [2024-05-15 13:29:59.918366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:47.796 [2024-05-15 13:30:00.650139] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:47.796 request: 00:11:47.796 { 00:11:47.796 "trtype": "tcp", 00:11:47.796 "method": "nvmf_get_transports", 00:11:47.796 "req_id": 1 00:11:47.796 } 00:11:47.796 Got JSON-RPC error response 00:11:47.796 response: 00:11:47.796 { 00:11:47.796 "code": -19, 00:11:47.796 "message": "No such device" 00:11:47.796 } 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:47.796 [2024-05-15 13:30:00.662266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.796 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:47.796 { 00:11:47.796 "subsystems": [ 00:11:47.796 { 00:11:47.796 "subsystem": "keyring", 00:11:47.796 "config": [] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "iobuf", 00:11:47.796 "config": [ 00:11:47.796 { 00:11:47.796 "method": "iobuf_set_options", 00:11:47.796 "params": { 00:11:47.796 "small_pool_count": 8192, 00:11:47.796 "large_pool_count": 1024, 00:11:47.796 "small_bufsize": 8192, 00:11:47.796 "large_bufsize": 135168 00:11:47.796 } 00:11:47.796 } 00:11:47.796 ] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "sock", 00:11:47.796 "config": [ 00:11:47.796 { 00:11:47.796 "method": "sock_impl_set_options", 00:11:47.796 "params": { 00:11:47.796 "impl_name": "uring", 00:11:47.796 "recv_buf_size": 2097152, 00:11:47.796 "send_buf_size": 2097152, 00:11:47.796 "enable_recv_pipe": true, 00:11:47.796 "enable_quickack": false, 00:11:47.796 "enable_placement_id": 0, 00:11:47.796 "enable_zerocopy_send_server": false, 00:11:47.796 "enable_zerocopy_send_client": false, 00:11:47.796 "zerocopy_threshold": 0, 00:11:47.796 "tls_version": 0, 00:11:47.796 "enable_ktls": false 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "sock_impl_set_options", 00:11:47.796 "params": { 00:11:47.796 
"impl_name": "posix", 00:11:47.796 "recv_buf_size": 2097152, 00:11:47.796 "send_buf_size": 2097152, 00:11:47.796 "enable_recv_pipe": true, 00:11:47.796 "enable_quickack": false, 00:11:47.796 "enable_placement_id": 0, 00:11:47.796 "enable_zerocopy_send_server": true, 00:11:47.796 "enable_zerocopy_send_client": false, 00:11:47.796 "zerocopy_threshold": 0, 00:11:47.796 "tls_version": 0, 00:11:47.796 "enable_ktls": false 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "sock_impl_set_options", 00:11:47.796 "params": { 00:11:47.796 "impl_name": "ssl", 00:11:47.796 "recv_buf_size": 4096, 00:11:47.796 "send_buf_size": 4096, 00:11:47.796 "enable_recv_pipe": true, 00:11:47.796 "enable_quickack": false, 00:11:47.796 "enable_placement_id": 0, 00:11:47.796 "enable_zerocopy_send_server": true, 00:11:47.796 "enable_zerocopy_send_client": false, 00:11:47.796 "zerocopy_threshold": 0, 00:11:47.796 "tls_version": 0, 00:11:47.796 "enable_ktls": false 00:11:47.796 } 00:11:47.796 } 00:11:47.796 ] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "vmd", 00:11:47.796 "config": [] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "accel", 00:11:47.796 "config": [ 00:11:47.796 { 00:11:47.796 "method": "accel_set_options", 00:11:47.796 "params": { 00:11:47.796 "small_cache_size": 128, 00:11:47.796 "large_cache_size": 16, 00:11:47.796 "task_count": 2048, 00:11:47.796 "sequence_count": 2048, 00:11:47.796 "buf_count": 2048 00:11:47.796 } 00:11:47.796 } 00:11:47.796 ] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "bdev", 00:11:47.796 "config": [ 00:11:47.796 { 00:11:47.796 "method": "bdev_set_options", 00:11:47.796 "params": { 00:11:47.796 "bdev_io_pool_size": 65535, 00:11:47.796 "bdev_io_cache_size": 256, 00:11:47.796 "bdev_auto_examine": true, 00:11:47.796 "iobuf_small_cache_size": 128, 00:11:47.796 "iobuf_large_cache_size": 16 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "bdev_raid_set_options", 00:11:47.796 "params": { 00:11:47.796 "process_window_size_kb": 1024 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "bdev_iscsi_set_options", 00:11:47.796 "params": { 00:11:47.796 "timeout_sec": 30 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "bdev_nvme_set_options", 00:11:47.796 "params": { 00:11:47.796 "action_on_timeout": "none", 00:11:47.796 "timeout_us": 0, 00:11:47.796 "timeout_admin_us": 0, 00:11:47.796 "keep_alive_timeout_ms": 10000, 00:11:47.796 "arbitration_burst": 0, 00:11:47.796 "low_priority_weight": 0, 00:11:47.796 "medium_priority_weight": 0, 00:11:47.796 "high_priority_weight": 0, 00:11:47.796 "nvme_adminq_poll_period_us": 10000, 00:11:47.796 "nvme_ioq_poll_period_us": 0, 00:11:47.796 "io_queue_requests": 0, 00:11:47.796 "delay_cmd_submit": true, 00:11:47.796 "transport_retry_count": 4, 00:11:47.796 "bdev_retry_count": 3, 00:11:47.796 "transport_ack_timeout": 0, 00:11:47.796 "ctrlr_loss_timeout_sec": 0, 00:11:47.796 "reconnect_delay_sec": 0, 00:11:47.796 "fast_io_fail_timeout_sec": 0, 00:11:47.796 "disable_auto_failback": false, 00:11:47.796 "generate_uuids": false, 00:11:47.796 "transport_tos": 0, 00:11:47.796 "nvme_error_stat": false, 00:11:47.796 "rdma_srq_size": 0, 00:11:47.796 "io_path_stat": false, 00:11:47.796 "allow_accel_sequence": false, 00:11:47.796 "rdma_max_cq_size": 0, 00:11:47.796 "rdma_cm_event_timeout_ms": 0, 00:11:47.796 "dhchap_digests": [ 00:11:47.796 "sha256", 00:11:47.796 "sha384", 00:11:47.796 "sha512" 00:11:47.796 ], 00:11:47.796 "dhchap_dhgroups": [ 00:11:47.796 "null", 
00:11:47.796 "ffdhe2048", 00:11:47.796 "ffdhe3072", 00:11:47.796 "ffdhe4096", 00:11:47.796 "ffdhe6144", 00:11:47.796 "ffdhe8192" 00:11:47.796 ] 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "bdev_nvme_set_hotplug", 00:11:47.796 "params": { 00:11:47.796 "period_us": 100000, 00:11:47.796 "enable": false 00:11:47.796 } 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "method": "bdev_wait_for_examine" 00:11:47.796 } 00:11:47.796 ] 00:11:47.796 }, 00:11:47.796 { 00:11:47.796 "subsystem": "scsi", 00:11:47.796 "config": null 00:11:47.796 }, 00:11:47.796 { 00:11:47.797 "subsystem": "scheduler", 00:11:47.797 "config": [ 00:11:47.797 { 00:11:47.797 "method": "framework_set_scheduler", 00:11:47.797 "params": { 00:11:47.797 "name": "static" 00:11:47.797 } 00:11:47.797 } 00:11:47.797 ] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "vhost_scsi", 00:11:47.797 "config": [] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "vhost_blk", 00:11:47.797 "config": [] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "ublk", 00:11:47.797 "config": [] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "nbd", 00:11:47.797 "config": [] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "nvmf", 00:11:47.797 "config": [ 00:11:47.797 { 00:11:47.797 "method": "nvmf_set_config", 00:11:47.797 "params": { 00:11:47.797 "discovery_filter": "match_any", 00:11:47.797 "admin_cmd_passthru": { 00:11:47.797 "identify_ctrlr": false 00:11:47.797 } 00:11:47.797 } 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "method": "nvmf_set_max_subsystems", 00:11:47.797 "params": { 00:11:47.797 "max_subsystems": 1024 00:11:47.797 } 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "method": "nvmf_set_crdt", 00:11:47.797 "params": { 00:11:47.797 "crdt1": 0, 00:11:47.797 "crdt2": 0, 00:11:47.797 "crdt3": 0 00:11:47.797 } 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "method": "nvmf_create_transport", 00:11:47.797 "params": { 00:11:47.797 "trtype": "TCP", 00:11:47.797 "max_queue_depth": 128, 00:11:47.797 "max_io_qpairs_per_ctrlr": 127, 00:11:47.797 "in_capsule_data_size": 4096, 00:11:47.797 "max_io_size": 131072, 00:11:47.797 "io_unit_size": 131072, 00:11:47.797 "max_aq_depth": 128, 00:11:47.797 "num_shared_buffers": 511, 00:11:47.797 "buf_cache_size": 4294967295, 00:11:47.797 "dif_insert_or_strip": false, 00:11:47.797 "zcopy": false, 00:11:47.797 "c2h_success": true, 00:11:47.797 "sock_priority": 0, 00:11:47.797 "abort_timeout_sec": 1, 00:11:47.797 "ack_timeout": 0, 00:11:47.797 "data_wr_pool_size": 0 00:11:47.797 } 00:11:47.797 } 00:11:47.797 ] 00:11:47.797 }, 00:11:47.797 { 00:11:47.797 "subsystem": "iscsi", 00:11:47.797 "config": [ 00:11:47.797 { 00:11:47.797 "method": "iscsi_set_options", 00:11:47.797 "params": { 00:11:47.797 "node_base": "iqn.2016-06.io.spdk", 00:11:47.797 "max_sessions": 128, 00:11:47.797 "max_connections_per_session": 2, 00:11:47.797 "max_queue_depth": 64, 00:11:47.797 "default_time2wait": 2, 00:11:47.797 "default_time2retain": 20, 00:11:47.797 "first_burst_length": 8192, 00:11:47.797 "immediate_data": true, 00:11:47.797 "allow_duplicated_isid": false, 00:11:47.797 "error_recovery_level": 0, 00:11:47.797 "nop_timeout": 60, 00:11:47.797 "nop_in_interval": 30, 00:11:47.797 "disable_chap": false, 00:11:47.797 "require_chap": false, 00:11:47.797 "mutual_chap": false, 00:11:47.797 "chap_group": 0, 00:11:47.797 "max_large_datain_per_connection": 64, 00:11:47.797 "max_r2t_per_connection": 4, 00:11:47.797 "pdu_pool_size": 36864, 00:11:47.797 "immediate_data_pool_size": 16384, 00:11:47.797 
"data_out_pool_size": 2048 00:11:47.797 } 00:11:47.797 } 00:11:47.797 ] 00:11:47.797 } 00:11:47.797 ] 00:11:47.797 } 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72145 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 72145 ']' 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 72145 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72145 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72145' 00:11:47.797 killing process with pid 72145 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 72145 00:11:47.797 13:30:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 72145 00:11:48.386 13:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72178 00:11:48.386 13:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:48.386 13:30:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72178 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 72178 ']' 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 72178 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72178 00:11:53.680 killing process with pid 72178 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72178' 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 72178 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 72178 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:53.680 00:11:53.680 real 0m6.938s 00:11:53.680 user 0m6.645s 00:11:53.680 sys 0m0.622s 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.680 13:30:06 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:53.680 ************************************ 00:11:53.680 END TEST skip_rpc_with_json 00:11:53.680 ************************************ 00:11:53.680 13:30:06 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:53.680 13:30:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:53.680 13:30:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.680 13:30:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.680 ************************************ 00:11:53.680 START TEST skip_rpc_with_delay 00:11:53.680 ************************************ 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.680 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:53.681 [2024-05-15 13:30:06.723221] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
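The ERROR above is the expected outcome: --wait-for-rpc tells the app to pause startup until an RPC arrives, which cannot work when --no-rpc-server disables the RPC server entirely. Stripped of the NOT/xtrace helpers, the negative check amounts to the sketch below (binary path taken from this run; the real helper logic in autotest_common.sh is more involved):
# expect spdk_tgt to refuse the contradictory flag combination
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: --no-rpc-server together with --wait-for-rpc was accepted" >&2
    exit 1
fi
echo "OK: startup was rejected as expected"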
00:11:53.681 [2024-05-15 13:30:06.723807] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:53.681 00:11:53.681 real 0m0.109s 00:11:53.681 user 0m0.060s 00:11:53.681 sys 0m0.044s 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:53.681 13:30:06 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:11:53.681 ************************************ 00:11:53.681 END TEST skip_rpc_with_delay 00:11:53.681 ************************************ 00:11:53.938 13:30:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:11:53.938 13:30:06 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:53.938 13:30:06 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:53.938 13:30:06 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:53.938 13:30:06 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:53.938 13:30:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.938 ************************************ 00:11:53.938 START TEST exit_on_failed_rpc_init 00:11:53.938 ************************************ 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72282 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72282 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 72282 ']' 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:53.938 13:30:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:53.938 [2024-05-15 13:30:06.870998] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:53.938 [2024-05-15 13:30:06.871329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:11:53.938 [2024-05-15 13:30:06.997944] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
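waitforlisten above blocks until the freshly started target (pid 72282) answers on its RPC socket. A rough stand-in for that helper, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the liveness probe (the real implementation in autotest_common.sh does more bookkeeping):
pid=72282
sock=/var/tmp/spdk.sock
for i in $(seq 1 100); do
    # give up early if the target already died
    kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    # the socket is considered ready once any RPC gets a reply
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done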
00:11:53.938 [2024-05-15 13:30:07.016879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.196 [2024-05-15 13:30:07.072862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:54.196 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:54.453 [2024-05-15 13:30:07.359060] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:54.453 [2024-05-15 13:30:07.359557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72298 ] 00:11:54.453 [2024-05-15 13:30:07.487397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:54.453 [2024-05-15 13:30:07.500936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.712 [2024-05-15 13:30:07.579077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.712 [2024-05-15 13:30:07.579557] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
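The second spdk_tgt (-m 0x2) fails exactly because both instances default to /var/tmp/spdk.sock; that collision is what exit_on_failed_rpc_init asserts. When two targets genuinely need to coexist, the second one is given its own RPC socket with -r, for example (second socket name chosen arbitrarily here):
# first target keeps the default /var/tmp/spdk.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
# second target listens on a separate RPC socket instead of failing to init
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &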
00:11:54.712 [2024-05-15 13:30:07.579773] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:54.712 [2024-05-15 13:30:07.579888] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72282 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 72282 ']' 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 72282 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72282 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72282' 00:11:54.712 killing process with pid 72282 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 72282 00:11:54.712 13:30:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 72282 00:11:54.970 00:11:54.970 real 0m1.230s 00:11:54.970 user 0m1.382s 00:11:54.970 sys 0m0.364s 00:11:54.970 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.970 13:30:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:11:54.970 ************************************ 00:11:54.970 END TEST exit_on_failed_rpc_init 00:11:54.970 ************************************ 00:11:55.228 13:30:08 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:55.228 00:11:55.228 real 0m13.938s 00:11:55.228 user 0m13.157s 00:11:55.228 sys 0m1.498s 00:11:55.228 13:30:08 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.228 ************************************ 00:11:55.228 END TEST skip_rpc 00:11:55.228 ************************************ 00:11:55.228 13:30:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.228 13:30:08 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:55.228 13:30:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.228 13:30:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.228 13:30:08 -- common/autotest_common.sh@10 -- # set +x 00:11:55.228 
************************************ 00:11:55.228 START TEST rpc_client 00:11:55.228 ************************************ 00:11:55.228 13:30:08 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:55.228 * Looking for test storage... 00:11:55.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:55.228 13:30:08 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:55.228 OK 00:11:55.228 13:30:08 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:55.228 00:11:55.228 real 0m0.096s 00:11:55.228 user 0m0.036s 00:11:55.228 sys 0m0.068s 00:11:55.228 13:30:08 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.228 ************************************ 00:11:55.228 END TEST rpc_client 00:11:55.228 ************************************ 00:11:55.228 13:30:08 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:11:55.228 13:30:08 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:55.228 13:30:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.229 13:30:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.229 13:30:08 -- common/autotest_common.sh@10 -- # set +x 00:11:55.229 ************************************ 00:11:55.229 START TEST json_config 00:11:55.229 ************************************ 00:11:55.229 13:30:08 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@7 -- # uname -s 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:55.488 13:30:08 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.488 13:30:08 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.488 13:30:08 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.488 13:30:08 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.488 13:30:08 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.488 13:30:08 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.488 13:30:08 json_config -- paths/export.sh@5 -- # export PATH 00:11:55.488 13:30:08 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@47 -- # : 0 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.488 13:30:08 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:55.488 13:30:08 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:55.488 INFO: JSON configuration test init 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:55.488 13:30:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.488 13:30:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:55.488 13:30:08 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:55.488 13:30:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:55.488 13:30:08 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:55.488 13:30:08 json_config -- json_config/common.sh@9 -- # local app=target 00:11:55.488 13:30:08 json_config -- json_config/common.sh@10 -- # shift 00:11:55.488 13:30:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:55.488 13:30:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:55.488 13:30:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:11:55.488 13:30:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:55.488 13:30:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:55.488 13:30:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72416 00:11:55.488 Waiting for target to run... 00:11:55.488 13:30:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:55.489 13:30:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:55.489 13:30:08 json_config -- json_config/common.sh@25 -- # waitforlisten 72416 /var/tmp/spdk_tgt.sock 00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@827 -- # '[' -z 72416 ']' 00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
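For json_config the target is started with --wait-for-rpc on its own socket (/var/tmp/spdk_tgt.sock), so subsystem initialization is held back until the test has pushed a configuration over RPC. In outline the flow looks like the sketch below; framework_start_init is the standard RPC that releases the deferred initialization, and the exact ordering used by the test lives in test/json_config/common.sh and json_config.sh:
sock=/var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
# ... wait for the socket, then configure before letting init proceed
# config.json: placeholder for whatever saved configuration should be applied
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" load_config < config.json
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" framework_start_init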
00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.489 13:30:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:55.489 [2024-05-15 13:30:08.429360] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:55.489 [2024-05-15 13:30:08.429476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72416 ] 00:11:55.747 [2024-05-15 13:30:08.806687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:55.747 [2024-05-15 13:30:08.823413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.004 [2024-05-15 13:30:08.857277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:56.573 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@860 -- # return 0 00:11:56.573 13:30:09 json_config -- json_config/common.sh@26 -- # echo '' 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.573 13:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:56.573 13:30:09 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:56.573 13:30:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:56.830 13:30:09 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:56.830 13:30:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:56.830 13:30:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:56.830 13:30:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:57.088 13:30:10 
json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@48 -- # local get_types 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:57.088 13:30:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.088 13:30:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@55 -- # return 0 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:57.088 13:30:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:57.088 13:30:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:57.088 13:30:10 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:57.088 13:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:57.346 MallocForNvmf0 00:11:57.346 13:30:10 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:57.346 13:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:57.603 MallocForNvmf1 00:11:57.603 13:30:10 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:57.603 13:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:57.860 [2024-05-15 13:30:10.878950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.860 13:30:10 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:57.860 13:30:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.118 13:30:11 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:58.118 13:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:58.376 
13:30:11 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:58.376 13:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:58.633 13:30:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:58.633 13:30:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:58.930 [2024-05-15 13:30:11.939316] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:58.930 [2024-05-15 13:30:11.939621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:58.930 13:30:11 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:58.930 13:30:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.930 13:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:58.930 13:30:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:58.930 13:30:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.930 13:30:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:59.208 13:30:12 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:59.208 13:30:12 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:59.208 13:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:59.208 MallocBdevForConfigChangeCheck 00:11:59.208 13:30:12 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:59.208 13:30:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:59.208 13:30:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:11:59.208 13:30:12 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:59.208 13:30:12 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:59.773 INFO: shutting down applications... 00:11:59.773 13:30:12 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
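Every RPC in the subsystem build-up above appears verbatim in this run, so the same NVMe-oF target state can be reproduced by hand against the /var/tmp/spdk_tgt.sock socket before taking the snapshot:
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
# snapshot used later for the --json relaunch and the diff check
$RPC save_config > spdk_tgt_config.json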
00:11:59.773 13:30:12 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:59.773 13:30:12 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:59.773 13:30:12 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:59.773 13:30:12 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:00.032 Calling clear_iscsi_subsystem 00:12:00.032 Calling clear_nvmf_subsystem 00:12:00.032 Calling clear_nbd_subsystem 00:12:00.032 Calling clear_ublk_subsystem 00:12:00.032 Calling clear_vhost_blk_subsystem 00:12:00.032 Calling clear_vhost_scsi_subsystem 00:12:00.032 Calling clear_bdev_subsystem 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@343 -- # count=100 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:00.032 13:30:12 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:12:00.289 13:30:13 json_config -- json_config/json_config.sh@345 -- # break 00:12:00.289 13:30:13 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:00.289 13:30:13 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:00.289 13:30:13 json_config -- json_config/common.sh@31 -- # local app=target 00:12:00.289 13:30:13 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:00.289 13:30:13 json_config -- json_config/common.sh@35 -- # [[ -n 72416 ]] 00:12:00.289 13:30:13 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72416 00:12:00.289 [2024-05-15 13:30:13.358827] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:00.289 13:30:13 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:00.289 13:30:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:00.289 13:30:13 json_config -- json_config/common.sh@41 -- # kill -0 72416 00:12:00.289 13:30:13 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:12:00.852 13:30:13 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:12:00.852 13:30:13 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:00.852 SPDK target shutdown done 00:12:00.852 13:30:13 json_config -- json_config/common.sh@41 -- # kill -0 72416 00:12:00.852 13:30:13 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:00.852 13:30:13 json_config -- json_config/common.sh@43 -- # break 00:12:00.852 13:30:13 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:00.852 13:30:13 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:00.852 INFO: relaunching applications... 00:12:00.852 13:30:13 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
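Before relaunching, json_config_clear above drives clear_config.py and then re-saves the configuration through config_filter.py until the check_empty filter passes (the count=100 retry loop). Condensed, one iteration of that verification is roughly:
SOCK=/var/tmp/spdk_tgt.sock
/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s "$SOCK" clear_config
# an empty config (ignoring global parameters) means the clear really worked
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" save_config \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
    | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty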
00:12:00.852 13:30:13 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:00.852 13:30:13 json_config -- json_config/common.sh@9 -- # local app=target 00:12:00.852 13:30:13 json_config -- json_config/common.sh@10 -- # shift 00:12:00.852 13:30:13 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:00.852 13:30:13 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:00.852 13:30:13 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:00.852 13:30:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:00.852 13:30:13 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:00.852 13:30:13 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72601 00:12:00.852 13:30:13 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:00.852 Waiting for target to run... 00:12:00.852 13:30:13 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:00.852 13:30:13 json_config -- json_config/common.sh@25 -- # waitforlisten 72601 /var/tmp/spdk_tgt.sock 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@827 -- # '[' -z 72601 ']' 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:00.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:00.852 13:30:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:00.852 [2024-05-15 13:30:13.944339] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:00.852 [2024-05-15 13:30:13.944498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72601 ] 00:12:01.440 [2024-05-15 13:30:14.439442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:01.440 [2024-05-15 13:30:14.454013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.440 [2024-05-15 13:30:14.503631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.718 [2024-05-15 13:30:14.798688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.976 [2024-05-15 13:30:14.830602] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:01.976 [2024-05-15 13:30:14.830838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:01.976 13:30:14 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:01.976 13:30:14 json_config -- common/autotest_common.sh@860 -- # return 0 00:12:01.976 00:12:01.976 INFO: Checking if target configuration is the same... 
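Relaunching with --json replays the file produced earlier by save_config, so the malloc bdevs and the NVMe-oF subsystem come back without re-issuing any RPC; the check that follows then proves the live configuration still matches the file. A rough plain-shell equivalent of what json_diff.sh performs in the trace below:
SOCK=/var/tmp/spdk_tgt.sock
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
# restart from the snapshot
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
# ... wait for the socket, then compare live vs. saved after sorting both sides
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'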
00:12:01.976 13:30:14 json_config -- json_config/common.sh@26 -- # echo '' 00:12:01.976 13:30:14 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:01.976 13:30:14 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:01.976 13:30:14 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:01.976 13:30:14 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:01.976 13:30:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:01.976 + '[' 2 -ne 2 ']' 00:12:01.976 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:01.976 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:01.976 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:01.976 +++ basename /dev/fd/62 00:12:01.976 ++ mktemp /tmp/62.XXX 00:12:01.976 + tmp_file_1=/tmp/62.HY6 00:12:01.976 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:01.976 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:01.976 + tmp_file_2=/tmp/spdk_tgt_config.json.1gF 00:12:01.976 + ret=0 00:12:01.976 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:02.542 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:02.542 + diff -u /tmp/62.HY6 /tmp/spdk_tgt_config.json.1gF 00:12:02.543 INFO: JSON config files are the same 00:12:02.543 + echo 'INFO: JSON config files are the same' 00:12:02.543 + rm /tmp/62.HY6 /tmp/spdk_tgt_config.json.1gF 00:12:02.543 + exit 0 00:12:02.543 INFO: changing configuration and checking if this can be detected... 00:12:02.543 13:30:15 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:02.543 13:30:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:12:02.543 13:30:15 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:02.543 13:30:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:02.800 13:30:15 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:02.800 13:30:15 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:02.800 13:30:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:02.800 + '[' 2 -ne 2 ']' 00:12:02.800 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:02.800 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:12:02.800 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:02.800 +++ basename /dev/fd/62 00:12:02.800 ++ mktemp /tmp/62.XXX 00:12:02.800 + tmp_file_1=/tmp/62.FKM 00:12:02.800 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:02.800 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:02.800 + tmp_file_2=/tmp/spdk_tgt_config.json.w0B 00:12:02.800 + ret=0 00:12:02.800 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.057 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.057 + diff -u /tmp/62.FKM /tmp/spdk_tgt_config.json.w0B 00:12:03.057 + ret=1 00:12:03.057 + echo '=== Start of file: /tmp/62.FKM ===' 00:12:03.057 + cat /tmp/62.FKM 00:12:03.316 + echo '=== End of file: /tmp/62.FKM ===' 00:12:03.316 + echo '' 00:12:03.316 + echo '=== Start of file: /tmp/spdk_tgt_config.json.w0B ===' 00:12:03.316 + cat /tmp/spdk_tgt_config.json.w0B 00:12:03.316 + echo '=== End of file: /tmp/spdk_tgt_config.json.w0B ===' 00:12:03.316 + echo '' 00:12:03.316 + rm /tmp/62.FKM /tmp/spdk_tgt_config.json.w0B 00:12:03.316 + exit 1 00:12:03.316 INFO: configuration change detected. 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@317 -- # [[ -n 72601 ]] 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@193 -- # uname -s 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:03.316 13:30:16 json_config -- json_config/json_config.sh@323 -- # killprocess 72601 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@946 -- # '[' -z 72601 ']' 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@950 -- # kill -0 72601 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@951 -- # uname 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72601 00:12:03.316 
killing process with pid 72601 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72601' 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@965 -- # kill 72601 00:12:03.316 [2024-05-15 13:30:16.245739] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:03.316 13:30:16 json_config -- common/autotest_common.sh@970 -- # wait 72601 00:12:03.574 13:30:16 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:03.574 13:30:16 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:03.574 13:30:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.574 13:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:03.574 INFO: Success 00:12:03.574 13:30:16 json_config -- json_config/json_config.sh@328 -- # return 0 00:12:03.574 13:30:16 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:03.574 ************************************ 00:12:03.574 END TEST json_config 00:12:03.574 ************************************ 00:12:03.574 00:12:03.574 real 0m8.240s 00:12:03.574 user 0m11.664s 00:12:03.574 sys 0m1.846s 00:12:03.574 13:30:16 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.574 13:30:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:03.574 13:30:16 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:03.574 13:30:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:03.574 13:30:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.574 13:30:16 -- common/autotest_common.sh@10 -- # set +x 00:12:03.574 ************************************ 00:12:03.574 START TEST json_config_extra_key 00:12:03.574 ************************************ 00:12:03.574 13:30:16 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:03.574 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.574 13:30:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.575 13:30:16 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:03.575 13:30:16 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.575 13:30:16 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.575 13:30:16 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.575 13:30:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.575 13:30:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.575 13:30:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.575 13:30:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:03.575 13:30:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.575 13:30:16 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.575 13:30:16 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:03.575 INFO: launching applications... 00:12:03.575 13:30:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72747 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:03.575 Waiting for target to run... 
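The extra_key.json passed above is an ordinary SPDK JSON configuration in the same "subsystems"/"config"/"method"/"params" shape dumped earlier in this log. Its actual contents are not shown in this run, so the following is only a hypothetical minimal example of the format (bdev names and sizes invented for illustration):
cat > extra_key.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create", "params": { "name": "Malloc0", "num_blocks": 20000, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json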
00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72747 /var/tmp/spdk_tgt.sock 00:12:03.575 13:30:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 72747 ']' 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:03.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:03.575 13:30:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:03.833 [2024-05-15 13:30:16.703297] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:03.833 [2024-05-15 13:30:16.703676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72747 ] 00:12:04.091 [2024-05-15 13:30:17.062732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:04.091 [2024-05-15 13:30:17.082748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.091 [2024-05-15 13:30:17.117263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.656 13:30:17 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.656 13:30:17 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:12:04.656 13:30:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:04.656 00:12:04.657 13:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:12:04.657 INFO: shutting down applications... 
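What json_config_test_start_app did above reduces to: launch spdk_tgt in the background with an explicit RPC socket and a --json config, record its pid, then poll the socket until the application answers RPCs. A simplified stand-in for that start-and-wait sequence; SPDK_BIN and CONFIG are placeholder paths, and the real waitforlisten in autotest_common.sh carries more retry and error handling than this loop:

  SPDK_BIN=/path/to/spdk/build/bin/spdk_tgt        # placeholder location
  RPC_SOCK=/var/tmp/spdk_tgt.sock
  CONFIG=/path/to/extra_key.json                   # placeholder config

  "$SPDK_BIN" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
  tgt_pid=$!

  # Poll the RPC socket until the target responds, giving up after ~15 seconds.
  for _ in $(seq 1 30); do
      if /path/to/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
          echo "target is up (pid $tgt_pid)"
          break
      fi
      sleep 0.5
  done

The shutdown that follows in the output below mirrors this loop in reverse: send SIGINT, then poll kill -0 until the pid disappears.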
00:12:04.657 13:30:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72747 ]] 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72747 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72747 00:12:04.657 13:30:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72747 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:05.221 13:30:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:05.221 SPDK target shutdown done 00:12:05.221 13:30:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:05.221 Success 00:12:05.221 ************************************ 00:12:05.221 END TEST json_config_extra_key 00:12:05.221 ************************************ 00:12:05.221 00:12:05.221 real 0m1.597s 00:12:05.221 user 0m1.406s 00:12:05.221 sys 0m0.395s 00:12:05.221 13:30:18 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:05.221 13:30:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 13:30:18 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:05.221 13:30:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:05.221 13:30:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.221 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:12:05.221 ************************************ 00:12:05.221 START TEST alias_rpc 00:12:05.221 ************************************ 00:12:05.221 13:30:18 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:05.221 * Looking for test storage... 
00:12:05.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:05.221 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:05.221 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72812 00:12:05.221 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:05.221 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72812 00:12:05.221 13:30:18 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 72812 ']' 00:12:05.222 13:30:18 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.222 13:30:18 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:05.222 13:30:18 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.222 13:30:18 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:05.222 13:30:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.493 [2024-05-15 13:30:18.377894] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:05.493 [2024-05-15 13:30:18.378273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72812 ] 00:12:05.493 [2024-05-15 13:30:18.507418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
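The trap 'killprocess $spdk_tgt_pid; exit 1' ERR line above is what guarantees the background spdk_tgt gets torn down even if a later command in the test fails. The same cleanup-on-error pattern in isolation; the path is a placeholder, and killprocess is reduced here to a plain kill/wait, whereas the harness's helper also verifies the process name first:

  set -e
  /path/to/spdk/build/bin/spdk_tgt &     # placeholder path
  spdk_tgt_pid=$!

  cleanup() {
      kill "$spdk_tgt_pid" 2>/dev/null || true
      wait "$spdk_tgt_pid" 2>/dev/null || true
  }
  trap 'cleanup; exit 1' ERR

  # ...test body: any command that fails from here on triggers cleanup via the ERR trap...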
00:12:05.493 [2024-05-15 13:30:18.523802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.493 [2024-05-15 13:30:18.585761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.750 13:30:18 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:05.750 13:30:18 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:05.750 13:30:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:06.316 13:30:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72812 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 72812 ']' 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 72812 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72812 00:12:06.316 killing process with pid 72812 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72812' 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@965 -- # kill 72812 00:12:06.316 13:30:19 alias_rpc -- common/autotest_common.sh@970 -- # wait 72812 00:12:06.573 ************************************ 00:12:06.573 END TEST alias_rpc 00:12:06.573 ************************************ 00:12:06.573 00:12:06.573 real 0m1.315s 00:12:06.573 user 0m1.456s 00:12:06.573 sys 0m0.405s 00:12:06.573 13:30:19 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:06.573 13:30:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 13:30:19 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:12:06.573 13:30:19 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:06.573 13:30:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:06.573 13:30:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:06.573 13:30:19 -- common/autotest_common.sh@10 -- # set +x 00:12:06.573 ************************************ 00:12:06.573 START TEST spdkcli_tcp 00:12:06.573 ************************************ 00:12:06.573 13:30:19 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:12:06.573 * Looking for test storage... 
00:12:06.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:12:06.573 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:12:06.573 13:30:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:12:06.573 13:30:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:12:06.573 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:12:06.573 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:12:06.829 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:06.829 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=72880 00:12:06.829 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:12:06.829 13:30:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 72880 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 72880 ']' 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:06.829 13:30:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.829 [2024-05-15 13:30:19.743179] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:06.829 [2024-05-15 13:30:19.743542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72880 ] 00:12:06.829 [2024-05-15 13:30:19.871489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
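spdkcli/tcp.sh exercises the same RPC surface over TCP: as the output that follows shows, it starts spdk_tgt on the usual UNIX-domain socket, bridges that socket to 127.0.0.1:9998 with socat, and then drives rpc.py against the TCP endpoint. The bridge reduced to its essentials, with the script path as a placeholder:

  RPC_SOCK=/var/tmp/spdk.sock
  IP=127.0.0.1
  PORT=9998

  # Forward a TCP listener to the target's UNIX-domain RPC socket.
  socat TCP-LISTEN:"$PORT" UNIX-CONNECT:"$RPC_SOCK" &
  socat_pid=$!

  # Same rpc_get_methods call, but over TCP; -r and -t are the retry count and
  # per-call timeout the harness uses, -s and -p select the TCP address and port.
  /path/to/spdk/scripts/rpc.py -r 100 -t 2 -s "$IP" -p "$PORT" rpc_get_methods

  kill "$socat_pid"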
00:12:06.829 [2024-05-15 13:30:19.890454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:07.086 [2024-05-15 13:30:19.953281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.086 [2024-05-15 13:30:19.953297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.086 13:30:20 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:07.086 13:30:20 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:12:07.086 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=72890 00:12:07.086 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:12:07.086 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:12:07.345 [ 00:12:07.345 "bdev_malloc_delete", 00:12:07.345 "bdev_malloc_create", 00:12:07.345 "bdev_null_resize", 00:12:07.345 "bdev_null_delete", 00:12:07.345 "bdev_null_create", 00:12:07.345 "bdev_nvme_cuse_unregister", 00:12:07.345 "bdev_nvme_cuse_register", 00:12:07.345 "bdev_opal_new_user", 00:12:07.345 "bdev_opal_set_lock_state", 00:12:07.345 "bdev_opal_delete", 00:12:07.345 "bdev_opal_get_info", 00:12:07.345 "bdev_opal_create", 00:12:07.345 "bdev_nvme_opal_revert", 00:12:07.345 "bdev_nvme_opal_init", 00:12:07.345 "bdev_nvme_send_cmd", 00:12:07.345 "bdev_nvme_get_path_iostat", 00:12:07.345 "bdev_nvme_get_mdns_discovery_info", 00:12:07.345 "bdev_nvme_stop_mdns_discovery", 00:12:07.345 "bdev_nvme_start_mdns_discovery", 00:12:07.345 "bdev_nvme_set_multipath_policy", 00:12:07.345 "bdev_nvme_set_preferred_path", 00:12:07.345 "bdev_nvme_get_io_paths", 00:12:07.345 "bdev_nvme_remove_error_injection", 00:12:07.345 "bdev_nvme_add_error_injection", 00:12:07.345 "bdev_nvme_get_discovery_info", 00:12:07.345 "bdev_nvme_stop_discovery", 00:12:07.345 "bdev_nvme_start_discovery", 00:12:07.345 "bdev_nvme_get_controller_health_info", 00:12:07.345 "bdev_nvme_disable_controller", 00:12:07.345 "bdev_nvme_enable_controller", 00:12:07.345 "bdev_nvme_reset_controller", 00:12:07.345 "bdev_nvme_get_transport_statistics", 00:12:07.345 "bdev_nvme_apply_firmware", 00:12:07.345 "bdev_nvme_detach_controller", 00:12:07.345 "bdev_nvme_get_controllers", 00:12:07.345 "bdev_nvme_attach_controller", 00:12:07.345 "bdev_nvme_set_hotplug", 00:12:07.345 "bdev_nvme_set_options", 00:12:07.345 "bdev_passthru_delete", 00:12:07.345 "bdev_passthru_create", 00:12:07.345 "bdev_lvol_check_shallow_copy", 00:12:07.345 "bdev_lvol_start_shallow_copy", 00:12:07.345 "bdev_lvol_grow_lvstore", 00:12:07.345 "bdev_lvol_get_lvols", 00:12:07.345 "bdev_lvol_get_lvstores", 00:12:07.345 "bdev_lvol_delete", 00:12:07.345 "bdev_lvol_set_read_only", 00:12:07.345 "bdev_lvol_resize", 00:12:07.345 "bdev_lvol_decouple_parent", 00:12:07.345 "bdev_lvol_inflate", 00:12:07.345 "bdev_lvol_rename", 00:12:07.345 "bdev_lvol_clone_bdev", 00:12:07.345 "bdev_lvol_clone", 00:12:07.346 "bdev_lvol_snapshot", 00:12:07.346 "bdev_lvol_create", 00:12:07.346 "bdev_lvol_delete_lvstore", 00:12:07.346 "bdev_lvol_rename_lvstore", 00:12:07.346 "bdev_lvol_create_lvstore", 00:12:07.346 "bdev_raid_set_options", 00:12:07.346 "bdev_raid_remove_base_bdev", 00:12:07.346 "bdev_raid_add_base_bdev", 00:12:07.346 "bdev_raid_delete", 00:12:07.346 "bdev_raid_create", 00:12:07.346 "bdev_raid_get_bdevs", 00:12:07.346 "bdev_error_inject_error", 00:12:07.346 "bdev_error_delete", 00:12:07.346 "bdev_error_create", 00:12:07.346 "bdev_split_delete", 00:12:07.346 "bdev_split_create", 
00:12:07.346 "bdev_delay_delete", 00:12:07.346 "bdev_delay_create", 00:12:07.346 "bdev_delay_update_latency", 00:12:07.346 "bdev_zone_block_delete", 00:12:07.346 "bdev_zone_block_create", 00:12:07.346 "blobfs_create", 00:12:07.346 "blobfs_detect", 00:12:07.346 "blobfs_set_cache_size", 00:12:07.346 "bdev_aio_delete", 00:12:07.346 "bdev_aio_rescan", 00:12:07.346 "bdev_aio_create", 00:12:07.346 "bdev_ftl_set_property", 00:12:07.346 "bdev_ftl_get_properties", 00:12:07.346 "bdev_ftl_get_stats", 00:12:07.346 "bdev_ftl_unmap", 00:12:07.346 "bdev_ftl_unload", 00:12:07.346 "bdev_ftl_delete", 00:12:07.346 "bdev_ftl_load", 00:12:07.346 "bdev_ftl_create", 00:12:07.346 "bdev_virtio_attach_controller", 00:12:07.346 "bdev_virtio_scsi_get_devices", 00:12:07.346 "bdev_virtio_detach_controller", 00:12:07.346 "bdev_virtio_blk_set_hotplug", 00:12:07.346 "bdev_iscsi_delete", 00:12:07.346 "bdev_iscsi_create", 00:12:07.346 "bdev_iscsi_set_options", 00:12:07.346 "bdev_uring_delete", 00:12:07.346 "bdev_uring_rescan", 00:12:07.346 "bdev_uring_create", 00:12:07.346 "accel_error_inject_error", 00:12:07.346 "ioat_scan_accel_module", 00:12:07.346 "dsa_scan_accel_module", 00:12:07.346 "iaa_scan_accel_module", 00:12:07.346 "keyring_file_remove_key", 00:12:07.346 "keyring_file_add_key", 00:12:07.346 "iscsi_get_histogram", 00:12:07.346 "iscsi_enable_histogram", 00:12:07.346 "iscsi_set_options", 00:12:07.346 "iscsi_get_auth_groups", 00:12:07.346 "iscsi_auth_group_remove_secret", 00:12:07.346 "iscsi_auth_group_add_secret", 00:12:07.346 "iscsi_delete_auth_group", 00:12:07.346 "iscsi_create_auth_group", 00:12:07.346 "iscsi_set_discovery_auth", 00:12:07.346 "iscsi_get_options", 00:12:07.346 "iscsi_target_node_request_logout", 00:12:07.346 "iscsi_target_node_set_redirect", 00:12:07.346 "iscsi_target_node_set_auth", 00:12:07.346 "iscsi_target_node_add_lun", 00:12:07.346 "iscsi_get_stats", 00:12:07.346 "iscsi_get_connections", 00:12:07.346 "iscsi_portal_group_set_auth", 00:12:07.346 "iscsi_start_portal_group", 00:12:07.346 "iscsi_delete_portal_group", 00:12:07.346 "iscsi_create_portal_group", 00:12:07.346 "iscsi_get_portal_groups", 00:12:07.346 "iscsi_delete_target_node", 00:12:07.346 "iscsi_target_node_remove_pg_ig_maps", 00:12:07.346 "iscsi_target_node_add_pg_ig_maps", 00:12:07.346 "iscsi_create_target_node", 00:12:07.346 "iscsi_get_target_nodes", 00:12:07.346 "iscsi_delete_initiator_group", 00:12:07.346 "iscsi_initiator_group_remove_initiators", 00:12:07.346 "iscsi_initiator_group_add_initiators", 00:12:07.346 "iscsi_create_initiator_group", 00:12:07.346 "iscsi_get_initiator_groups", 00:12:07.346 "nvmf_set_crdt", 00:12:07.346 "nvmf_set_config", 00:12:07.346 "nvmf_set_max_subsystems", 00:12:07.346 "nvmf_stop_mdns_prr", 00:12:07.346 "nvmf_publish_mdns_prr", 00:12:07.346 "nvmf_subsystem_get_listeners", 00:12:07.346 "nvmf_subsystem_get_qpairs", 00:12:07.346 "nvmf_subsystem_get_controllers", 00:12:07.346 "nvmf_get_stats", 00:12:07.346 "nvmf_get_transports", 00:12:07.346 "nvmf_create_transport", 00:12:07.346 "nvmf_get_targets", 00:12:07.346 "nvmf_delete_target", 00:12:07.346 "nvmf_create_target", 00:12:07.346 "nvmf_subsystem_allow_any_host", 00:12:07.346 "nvmf_subsystem_remove_host", 00:12:07.346 "nvmf_subsystem_add_host", 00:12:07.346 "nvmf_ns_remove_host", 00:12:07.346 "nvmf_ns_add_host", 00:12:07.346 "nvmf_subsystem_remove_ns", 00:12:07.346 "nvmf_subsystem_add_ns", 00:12:07.346 "nvmf_subsystem_listener_set_ana_state", 00:12:07.346 "nvmf_discovery_get_referrals", 00:12:07.346 "nvmf_discovery_remove_referral", 00:12:07.346 
"nvmf_discovery_add_referral", 00:12:07.346 "nvmf_subsystem_remove_listener", 00:12:07.346 "nvmf_subsystem_add_listener", 00:12:07.346 "nvmf_delete_subsystem", 00:12:07.346 "nvmf_create_subsystem", 00:12:07.346 "nvmf_get_subsystems", 00:12:07.346 "env_dpdk_get_mem_stats", 00:12:07.346 "nbd_get_disks", 00:12:07.346 "nbd_stop_disk", 00:12:07.346 "nbd_start_disk", 00:12:07.346 "ublk_recover_disk", 00:12:07.346 "ublk_get_disks", 00:12:07.346 "ublk_stop_disk", 00:12:07.346 "ublk_start_disk", 00:12:07.346 "ublk_destroy_target", 00:12:07.346 "ublk_create_target", 00:12:07.346 "virtio_blk_create_transport", 00:12:07.346 "virtio_blk_get_transports", 00:12:07.346 "vhost_controller_set_coalescing", 00:12:07.346 "vhost_get_controllers", 00:12:07.346 "vhost_delete_controller", 00:12:07.346 "vhost_create_blk_controller", 00:12:07.346 "vhost_scsi_controller_remove_target", 00:12:07.346 "vhost_scsi_controller_add_target", 00:12:07.346 "vhost_start_scsi_controller", 00:12:07.346 "vhost_create_scsi_controller", 00:12:07.346 "thread_set_cpumask", 00:12:07.346 "framework_get_scheduler", 00:12:07.346 "framework_set_scheduler", 00:12:07.346 "framework_get_reactors", 00:12:07.346 "thread_get_io_channels", 00:12:07.346 "thread_get_pollers", 00:12:07.346 "thread_get_stats", 00:12:07.346 "framework_monitor_context_switch", 00:12:07.346 "spdk_kill_instance", 00:12:07.346 "log_enable_timestamps", 00:12:07.346 "log_get_flags", 00:12:07.346 "log_clear_flag", 00:12:07.346 "log_set_flag", 00:12:07.346 "log_get_level", 00:12:07.346 "log_set_level", 00:12:07.346 "log_get_print_level", 00:12:07.346 "log_set_print_level", 00:12:07.346 "framework_enable_cpumask_locks", 00:12:07.346 "framework_disable_cpumask_locks", 00:12:07.346 "framework_wait_init", 00:12:07.346 "framework_start_init", 00:12:07.346 "scsi_get_devices", 00:12:07.346 "bdev_get_histogram", 00:12:07.346 "bdev_enable_histogram", 00:12:07.346 "bdev_set_qos_limit", 00:12:07.346 "bdev_set_qd_sampling_period", 00:12:07.346 "bdev_get_bdevs", 00:12:07.346 "bdev_reset_iostat", 00:12:07.346 "bdev_get_iostat", 00:12:07.346 "bdev_examine", 00:12:07.346 "bdev_wait_for_examine", 00:12:07.346 "bdev_set_options", 00:12:07.346 "notify_get_notifications", 00:12:07.346 "notify_get_types", 00:12:07.346 "accel_get_stats", 00:12:07.346 "accel_set_options", 00:12:07.346 "accel_set_driver", 00:12:07.346 "accel_crypto_key_destroy", 00:12:07.346 "accel_crypto_keys_get", 00:12:07.346 "accel_crypto_key_create", 00:12:07.346 "accel_assign_opc", 00:12:07.346 "accel_get_module_info", 00:12:07.346 "accel_get_opc_assignments", 00:12:07.346 "vmd_rescan", 00:12:07.346 "vmd_remove_device", 00:12:07.346 "vmd_enable", 00:12:07.346 "sock_get_default_impl", 00:12:07.346 "sock_set_default_impl", 00:12:07.346 "sock_impl_set_options", 00:12:07.346 "sock_impl_get_options", 00:12:07.346 "iobuf_get_stats", 00:12:07.346 "iobuf_set_options", 00:12:07.346 "framework_get_pci_devices", 00:12:07.346 "framework_get_config", 00:12:07.346 "framework_get_subsystems", 00:12:07.346 "trace_get_info", 00:12:07.346 "trace_get_tpoint_group_mask", 00:12:07.346 "trace_disable_tpoint_group", 00:12:07.346 "trace_enable_tpoint_group", 00:12:07.346 "trace_clear_tpoint_mask", 00:12:07.346 "trace_set_tpoint_mask", 00:12:07.346 "keyring_get_keys", 00:12:07.346 "spdk_get_version", 00:12:07.346 "rpc_get_methods" 00:12:07.346 ] 00:12:07.603 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.603 13:30:20 spdkcli_tcp 
-- common/autotest_common.sh@10 -- # set +x 00:12:07.603 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:07.603 13:30:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 72880 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 72880 ']' 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 72880 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72880 00:12:07.603 killing process with pid 72880 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72880' 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 72880 00:12:07.603 13:30:20 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 72880 00:12:07.859 ************************************ 00:12:07.859 END TEST spdkcli_tcp 00:12:07.859 ************************************ 00:12:07.859 00:12:07.859 real 0m1.269s 00:12:07.859 user 0m2.238s 00:12:07.859 sys 0m0.429s 00:12:07.859 13:30:20 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:07.859 13:30:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.859 13:30:20 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:07.859 13:30:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:07.859 13:30:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:07.859 13:30:20 -- common/autotest_common.sh@10 -- # set +x 00:12:07.860 ************************************ 00:12:07.860 START TEST dpdk_mem_utility 00:12:07.860 ************************************ 00:12:07.860 13:30:20 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:08.117 * Looking for test storage... 00:12:08.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:08.117 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:08.117 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72958 00:12:08.117 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:08.117 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72958 00:12:08.117 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 72958 ']' 00:12:08.118 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.118 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:08.118 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
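killprocess, used above to stop pid 72880 and again for the other targets in this section, deliberately refuses to signal an arbitrary pid: it first confirms the process still exists and that its command name matches what the test launched, then kills it and waits for it to exit. A condensed version of that check-then-kill pattern; the real helper in autotest_common.sh additionally branches on uname and special-cases processes running under sudo:

  killprocess() {
      local pid=$1
      # Only act if the pid is still alive.
      kill -0 "$pid" 2>/dev/null || return 0
      # Confirm it is the process we launched before signalling it.
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" || true
  }

  killprocess "$spdk_tgt_pid"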
00:12:08.118 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:08.118 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:08.118 [2024-05-15 13:30:21.061342] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:08.118 [2024-05-15 13:30:21.061617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:12:08.118 [2024-05-15 13:30:21.181162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:08.118 [2024-05-15 13:30:21.199569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.375 [2024-05-15 13:30:21.284049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.635 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:08.635 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:12:08.635 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:08.635 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:08.635 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.635 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:08.635 { 00:12:08.635 "filename": "/tmp/spdk_mem_dump.txt" 00:12:08.635 } 00:12:08.635 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.635 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:08.635 DPDK memory size 814.000000 MiB in 1 heap(s) 00:12:08.635 1 heaps totaling size 814.000000 MiB 00:12:08.635 size: 814.000000 MiB heap id: 0 00:12:08.635 end heaps---------- 00:12:08.635 8 mempools totaling size 598.116089 MiB 00:12:08.635 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:08.635 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:08.635 size: 84.521057 MiB name: bdev_io_72958 00:12:08.635 size: 51.011292 MiB name: evtpool_72958 00:12:08.635 size: 50.003479 MiB name: msgpool_72958 00:12:08.635 size: 21.763794 MiB name: PDU_Pool 00:12:08.635 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:08.635 size: 0.026123 MiB name: Session_Pool 00:12:08.635 end mempools------- 00:12:08.635 6 memzones totaling size 4.142822 MiB 00:12:08.635 size: 1.000366 MiB name: RG_ring_0_72958 00:12:08.635 size: 1.000366 MiB name: RG_ring_1_72958 00:12:08.635 size: 1.000366 MiB name: RG_ring_4_72958 00:12:08.635 size: 1.000366 MiB name: RG_ring_5_72958 00:12:08.635 size: 0.125366 MiB name: RG_ring_2_72958 00:12:08.635 size: 0.015991 MiB name: RG_ring_3_72958 00:12:08.635 end memzones------- 00:12:08.635 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:08.635 heap id: 0 total size: 814.000000 MiB number of busy elements: 241 number of free elements: 15 00:12:08.635 list of free elements. 
size: 12.482727 MiB 00:12:08.635 element at address: 0x200000400000 with size: 1.999512 MiB 00:12:08.635 element at address: 0x200018e00000 with size: 0.999878 MiB 00:12:08.635 element at address: 0x200019000000 with size: 0.999878 MiB 00:12:08.635 element at address: 0x200003e00000 with size: 0.996277 MiB 00:12:08.635 element at address: 0x200031c00000 with size: 0.994446 MiB 00:12:08.635 element at address: 0x200013800000 with size: 0.978699 MiB 00:12:08.635 element at address: 0x200007000000 with size: 0.959839 MiB 00:12:08.635 element at address: 0x200019200000 with size: 0.936584 MiB 00:12:08.635 element at address: 0x200000200000 with size: 0.836670 MiB 00:12:08.635 element at address: 0x20001aa00000 with size: 0.566956 MiB 00:12:08.635 element at address: 0x20000b200000 with size: 0.488892 MiB 00:12:08.635 element at address: 0x200000800000 with size: 0.486511 MiB 00:12:08.635 element at address: 0x200019400000 with size: 0.485657 MiB 00:12:08.635 element at address: 0x200027e00000 with size: 0.401611 MiB 00:12:08.635 element at address: 0x200003a00000 with size: 0.351318 MiB 00:12:08.635 list of standard malloc elements. size: 199.254700 MiB 00:12:08.635 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:12:08.635 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:12:08.635 element at address: 0x200018efff80 with size: 1.000122 MiB 00:12:08.635 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:12:08.635 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:12:08.635 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:12:08.635 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:12:08.635 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:12:08.635 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:12:08.635 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d7580 with size: 0.000183 MiB 
00:12:08.635 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:12:08.635 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087c980 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003adb300 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003adb500 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003affa80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x200003affb40 with size: 0.000183 MiB 00:12:08.636 element at 
address: 0x200003eff0c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91780 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa928c0 
with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:12:08.636 element at address: 0x20001aa94d80 with size: 0.000183 MiB 
00:12:08.637 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:12:08.637 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e66d00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e66dc0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6d9c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:12:08.637 element at 
address: 0x200027e6f780 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:12:08.637 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:12:08.637 list of memzone associated elements. size: 602.262573 MiB 00:12:08.637 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:12:08.637 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:08.637 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:12:08.637 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:08.637 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:12:08.637 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_72958_0 00:12:08.637 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:12:08.637 associated memzone info: size: 48.002930 MiB name: MP_evtpool_72958_0 00:12:08.637 element at address: 0x200003fff380 with size: 48.003052 MiB 00:12:08.637 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72958_0 00:12:08.637 element at address: 0x2000195be940 with size: 20.255554 MiB 00:12:08.637 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:08.637 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:12:08.637 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:08.637 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:12:08.637 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_72958 00:12:08.637 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:12:08.637 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72958 00:12:08.637 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:12:08.637 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72958 00:12:08.637 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:12:08.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:08.637 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:12:08.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:08.637 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:12:08.637 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:08.637 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:12:08.637 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:08.637 element at address: 0x200003eff180 with size: 1.000488 MiB 00:12:08.637 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72958 00:12:08.637 element at address: 0x200003affc00 with size: 1.000488 MiB 00:12:08.637 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72958 00:12:08.637 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:12:08.637 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72958 00:12:08.637 element at address: 0x200031cfe940 with size: 1.000488 MiB 
00:12:08.637 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72958 00:12:08.637 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:12:08.637 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72958 00:12:08.637 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:12:08.637 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:08.637 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:12:08.637 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:08.637 element at address: 0x20001947c540 with size: 0.250488 MiB 00:12:08.637 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:08.637 element at address: 0x200003adf880 with size: 0.125488 MiB 00:12:08.637 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72958 00:12:08.637 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:12:08.637 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:08.637 element at address: 0x200027e66e80 with size: 0.023743 MiB 00:12:08.637 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:08.637 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:12:08.637 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72958 00:12:08.637 element at address: 0x200027e6cfc0 with size: 0.002441 MiB 00:12:08.637 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:08.637 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:12:08.637 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72958 00:12:08.637 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:12:08.637 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72958 00:12:08.637 element at address: 0x200027e6da80 with size: 0.000305 MiB 00:12:08.637 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:08.637 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:08.637 13:30:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72958 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 72958 ']' 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 72958 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72958 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72958' 00:12:08.637 killing process with pid 72958 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 72958 00:12:08.637 13:30:21 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 72958 00:12:09.207 00:12:09.207 real 0m1.125s 00:12:09.207 user 0m1.096s 00:12:09.207 sys 0m0.429s 00:12:09.207 ************************************ 00:12:09.207 END TEST dpdk_mem_utility 00:12:09.207 ************************************ 00:12:09.207 13:30:22 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:09.207 13:30:22 dpdk_mem_utility -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.207 13:30:22 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:09.207 13:30:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:09.207 13:30:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.207 13:30:22 -- common/autotest_common.sh@10 -- # set +x 00:12:09.207 ************************************ 00:12:09.207 START TEST event 00:12:09.207 ************************************ 00:12:09.207 13:30:22 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:09.207 * Looking for test storage... 00:12:09.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:09.207 13:30:22 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:09.207 13:30:22 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:09.207 13:30:22 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:09.207 13:30:22 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:09.207 13:30:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:09.207 13:30:22 event -- common/autotest_common.sh@10 -- # set +x 00:12:09.207 ************************************ 00:12:09.207 START TEST event_perf 00:12:09.207 ************************************ 00:12:09.207 13:30:22 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:09.207 Running I/O for 1 seconds...[2024-05-15 13:30:22.211710] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:09.207 [2024-05-15 13:30:22.211925] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73028 ] 00:12:09.465 [2024-05-15 13:30:22.345228] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:09.465 [2024-05-15 13:30:22.363074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.465 [2024-05-15 13:30:22.419428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.465 [2024-05-15 13:30:22.419581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.465 [2024-05-15 13:30:22.419645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.465 [2024-05-15 13:30:22.419647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.397 Running I/O for 1 seconds... 00:12:10.397 lcore 0: 163352 00:12:10.397 lcore 1: 163352 00:12:10.397 lcore 2: 163354 00:12:10.397 lcore 3: 163351 00:12:10.397 done. 
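The heap, mempool and memzone listing in the dpdk_mem_utility run above was produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory state to the file named in the JSON reply (/tmp/spdk_mem_dump.txt here), and scripts/dpdk_mem_info.py renders that dump, with -m 0 selecting the detailed per-heap view. Reproduced standalone against a running target, with the script locations as placeholders:

  # Ask the running target to dump its DPDK memory state; the reply names the
  # dump file (the run above reported /tmp/spdk_mem_dump.txt).
  /path/to/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump, then print the detailed view for heap 0 as seen above.
  /path/to/spdk/scripts/dpdk_mem_info.py
  /path/to/spdk/scripts/dpdk_mem_info.py -m 0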
00:12:10.655 00:12:10.655 real 0m1.304s 00:12:10.655 user 0m4.094s 00:12:10.655 sys 0m0.064s 00:12:10.655 13:30:23 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:10.655 ************************************ 00:12:10.655 END TEST event_perf 00:12:10.655 ************************************ 00:12:10.655 13:30:23 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:10.655 13:30:23 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:10.655 13:30:23 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:10.655 13:30:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:10.655 13:30:23 event -- common/autotest_common.sh@10 -- # set +x 00:12:10.655 ************************************ 00:12:10.655 START TEST event_reactor 00:12:10.655 ************************************ 00:12:10.655 13:30:23 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:10.655 [2024-05-15 13:30:23.566478] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:10.655 [2024-05-15 13:30:23.566785] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:12:10.655 [2024-05-15 13:30:23.687139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:10.655 [2024-05-15 13:30:23.705193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.912 [2024-05-15 13:30:23.762019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.922 test_start 00:12:11.922 oneshot 00:12:11.922 tick 100 00:12:11.922 tick 100 00:12:11.922 tick 250 00:12:11.922 tick 100 00:12:11.922 tick 100 00:12:11.922 tick 250 00:12:11.922 tick 100 00:12:11.922 tick 500 00:12:11.922 tick 100 00:12:11.922 tick 100 00:12:11.922 tick 250 00:12:11.922 tick 100 00:12:11.922 tick 100 00:12:11.922 test_end 00:12:11.922 ************************************ 00:12:11.922 END TEST event_reactor 00:12:11.922 ************************************ 00:12:11.922 00:12:11.922 real 0m1.285s 00:12:11.922 user 0m1.126s 00:12:11.922 sys 0m0.049s 00:12:11.922 13:30:24 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:11.922 13:30:24 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:11.922 13:30:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:11.922 13:30:24 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:11.922 13:30:24 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:11.922 13:30:24 event -- common/autotest_common.sh@10 -- # set +x 00:12:11.922 ************************************ 00:12:11.923 START TEST event_reactor_perf 00:12:11.923 ************************************ 00:12:11.923 13:30:24 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:11.923 [2024-05-15 13:30:24.911933] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:12:11.923 [2024-05-15 13:30:24.912296] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73096 ] 00:12:12.180 [2024-05-15 13:30:25.037571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:12.180 [2024-05-15 13:30:25.053448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.180 [2024-05-15 13:30:25.107045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.110 test_start 00:12:13.110 test_end 00:12:13.110 Performance: 395564 events per second 00:12:13.110 ************************************ 00:12:13.110 END TEST event_reactor_perf 00:12:13.110 ************************************ 00:12:13.110 00:12:13.110 real 0m1.285s 00:12:13.110 user 0m1.126s 00:12:13.110 sys 0m0.050s 00:12:13.110 13:30:26 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.110 13:30:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:13.366 13:30:26 event -- event/event.sh@49 -- # uname -s 00:12:13.366 13:30:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:13.366 13:30:26 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:13.366 13:30:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:13.366 13:30:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.366 13:30:26 event -- common/autotest_common.sh@10 -- # set +x 00:12:13.366 ************************************ 00:12:13.366 START TEST event_scheduler 00:12:13.366 ************************************ 00:12:13.366 13:30:26 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:13.366 * Looking for test storage... 00:12:13.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:13.366 13:30:26 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:13.366 13:30:26 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73158 00:12:13.366 13:30:26 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:13.366 13:30:26 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:13.367 13:30:26 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73158 00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 73158 ']' 00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
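A hedged sketch of what the harness drives next over the freshly created RPC socket; the socket path, scheduler name, and RPC names match the log below, but the polling loop is only an illustration, not the waitforlisten implementation:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # wait for the app to start listening
"$RPC" -s "$SOCK" framework_set_scheduler dynamic                             # select the scheduler while init is paused (--wait-for-rpc)
"$RPC" -s "$SOCK" framework_start_init                                        # then let subsystem initialization finish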
00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:13.367 13:30:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:13.367 [2024-05-15 13:30:26.400475] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:13.367 [2024-05-15 13:30:26.400838] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73158 ] 00:12:13.624 [2024-05-15 13:30:26.542234] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:13.624 [2024-05-15 13:30:26.562439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:13.624 [2024-05-15 13:30:26.628045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.624 [2024-05-15 13:30:26.628195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.624 [2024-05-15 13:30:26.628358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.624 [2024-05-15 13:30:26.628352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.560 13:30:27 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:14.560 13:30:27 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:12:14.560 13:30:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:14.560 13:30:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.560 13:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.560 POWER: Env isn't set yet! 00:12:14.560 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:14.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.560 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.560 POWER: Attempting to initialise PSTAT power management... 00:12:14.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.560 POWER: Cannot set governor of lcore 0 to performance 00:12:14.560 POWER: Attempting to initialise AMD PSTATE power management... 00:12:14.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.560 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.560 POWER: Attempting to initialise CPPC power management... 00:12:14.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:14.560 POWER: Cannot set governor of lcore 0 to userspace 00:12:14.560 POWER: Attempting to initialise VM power management... 
00:12:14.560 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:14.560 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:14.560 POWER: Unable to set Power Management Environment for lcore 0 00:12:14.560 [2024-05-15 13:30:27.414875] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:12:14.561 [2024-05-15 13:30:27.414969] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:12:14.561 [2024-05-15 13:30:27.415008] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 [2024-05-15 13:30:27.488326] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 ************************************ 00:12:14.561 START TEST scheduler_create_thread 00:12:14.561 ************************************ 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 2 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 3 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 4 00:12:14.561 13:30:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 5 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 6 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 7 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 8 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 9 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 10 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.561 13:30:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:15.127 13:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.127 13:30:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:15.127 13:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.127 13:30:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:16.501 13:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.501 13:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:16.501 13:30:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:16.501 13:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.501 13:30:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.876 ************************************ 00:12:17.876 END TEST scheduler_create_thread 00:12:17.876 ************************************ 00:12:17.876 13:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.876 00:12:17.876 real 0m3.095s 00:12:17.876 user 0m0.018s 00:12:17.876 sys 0m0.011s 00:12:17.876 13:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:17.876 13:30:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:17.876 13:30:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:17.876 13:30:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73158 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 73158 ']' 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 73158 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73158 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:17.876 13:30:30 event.event_scheduler -- 
common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73158' 00:12:17.876 killing process with pid 73158 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 73158 00:12:17.876 13:30:30 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 73158 00:12:18.134 [2024-05-15 13:30:30.976395] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:18.134 00:12:18.134 real 0m4.960s 00:12:18.134 user 0m9.754s 00:12:18.134 sys 0m0.397s 00:12:18.134 13:30:31 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.134 ************************************ 00:12:18.134 END TEST event_scheduler 00:12:18.134 ************************************ 00:12:18.134 13:30:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:18.392 13:30:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:18.392 13:30:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:18.392 13:30:31 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:18.392 13:30:31 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.392 13:30:31 event -- common/autotest_common.sh@10 -- # set +x 00:12:18.392 ************************************ 00:12:18.392 START TEST app_repeat 00:12:18.392 ************************************ 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73259 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73259' 00:12:18.392 Process app_repeat pid: 73259 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:18.392 spdk_app_start Round 0 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:18.392 13:30:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73259 /var/tmp/spdk-nbd.sock 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 73259 ']' 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
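The kill-and-wait teardown recorded just above for pid 73158 follows the same pattern after every test app in this log; a rough sketch, with illustrative variable names rather than the autotest_common.sh source:

pid=73158                                     # the scheduler app stopped above
if kill -0 "$pid" 2>/dev/null; then           # only act if the process is still alive
    name=$(ps --no-headers -o comm= "$pid")   # refuse to signal e.g. a sudo wrapper
    [ "$name" = sudo ] || { echo "killing process with pid $pid"; kill "$pid"; }
    wait "$pid" 2>/dev/null                   # reap the child so its exit status is collected
fi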
00:12:18.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.392 13:30:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:18.392 [2024-05-15 13:30:31.300025] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:18.392 [2024-05-15 13:30:31.300942] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73259 ] 00:12:18.392 [2024-05-15 13:30:31.430115] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:18.392 [2024-05-15 13:30:31.446311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:18.673 [2024-05-15 13:30:31.501476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.673 [2024-05-15 13:30:31.501483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.673 13:30:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.673 13:30:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:18.673 13:30:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:18.934 Malloc0 00:12:18.934 13:30:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:19.192 Malloc1 00:12:19.192 13:30:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.192 13:30:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:19.449 /dev/nbd0 00:12:19.449 13:30:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:19.449 13:30:32 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:19.449 1+0 records in 00:12:19.449 1+0 records out 00:12:19.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429982 s, 9.5 MB/s 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:19.449 13:30:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:19.449 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.449 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.449 13:30:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:19.707 /dev/nbd1 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:19.965 1+0 records in 00:12:19.965 1+0 records out 00:12:19.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642181 s, 6.4 MB/s 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:19.965 13:30:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:19.965 13:30:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:20.222 { 00:12:20.222 "nbd_device": "/dev/nbd0", 00:12:20.222 "bdev_name": "Malloc0" 00:12:20.222 }, 00:12:20.222 { 00:12:20.222 "nbd_device": "/dev/nbd1", 00:12:20.222 "bdev_name": "Malloc1" 00:12:20.222 } 00:12:20.222 ]' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:20.222 { 00:12:20.222 "nbd_device": "/dev/nbd0", 00:12:20.222 "bdev_name": "Malloc0" 00:12:20.222 }, 00:12:20.222 { 00:12:20.222 "nbd_device": "/dev/nbd1", 00:12:20.222 "bdev_name": "Malloc1" 00:12:20.222 } 00:12:20.222 ]' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:20.222 /dev/nbd1' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:20.222 /dev/nbd1' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:20.222 256+0 records in 00:12:20.222 256+0 records out 00:12:20.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00698112 s, 150 MB/s 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:20.222 256+0 records in 00:12:20.222 256+0 records out 00:12:20.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025344 s, 41.4 MB/s 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:20.222 13:30:33 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:20.222 256+0 records in 00:12:20.222 256+0 records out 00:12:20.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031342 s, 33.5 MB/s 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.222 13:30:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.480 13:30:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:20.739 13:30:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:21.305 13:30:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:21.305 13:30:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:21.562 13:30:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:21.562 [2024-05-15 13:30:34.652965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.819 [2024-05-15 13:30:34.712564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.819 [2024-05-15 13:30:34.712569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.819 [2024-05-15 13:30:34.757964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:21.819 [2024-05-15 13:30:34.758266] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:25.100 spdk_app_start Round 1 00:12:25.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:25.100 13:30:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:25.100 13:30:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:25.100 13:30:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73259 /var/tmp/spdk-nbd.sock 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 73259 ']' 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
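Each app_repeat round drives the same small RPC sequence against the nbd socket; a hedged recap using the stock rpc.py client, with the sizes and device paths taken from the log:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096           # 64 MiB malloc bdev, 4 KiB blocks (Malloc0)
$RPC bdev_malloc_create 64 4096           # and a second one (Malloc1)
$RPC nbd_start_disk Malloc0 /dev/nbd0     # export each bdev as a kernel NBD device
$RPC nbd_start_disk Malloc1 /dev/nbd1
$RPC nbd_get_disks                        # JSON list of nbd_device/bdev_name pairs
$RPC nbd_stop_disk /dev/nbd0              # tear the exports down again
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM           # finish the round by stopping the app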
00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:25.100 13:30:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:25.100 13:30:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:25.100 Malloc0 00:12:25.100 13:30:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:25.358 Malloc1 00:12:25.358 13:30:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.358 13:30:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:25.616 /dev/nbd0 00:12:25.616 13:30:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:25.616 13:30:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:25.616 1+0 records in 00:12:25.616 1+0 records out 
00:12:25.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703627 s, 5.8 MB/s 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:25.616 13:30:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:25.616 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.616 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.616 13:30:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:25.881 /dev/nbd1 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:25.881 1+0 records in 00:12:25.881 1+0 records out 00:12:25.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326825 s, 12.5 MB/s 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:25.881 13:30:38 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:25.881 13:30:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:26.139 { 00:12:26.139 "nbd_device": "/dev/nbd0", 00:12:26.139 "bdev_name": "Malloc0" 00:12:26.139 }, 00:12:26.139 { 00:12:26.139 "nbd_device": "/dev/nbd1", 00:12:26.139 "bdev_name": "Malloc1" 00:12:26.139 } 
00:12:26.139 ]' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:26.139 { 00:12:26.139 "nbd_device": "/dev/nbd0", 00:12:26.139 "bdev_name": "Malloc0" 00:12:26.139 }, 00:12:26.139 { 00:12:26.139 "nbd_device": "/dev/nbd1", 00:12:26.139 "bdev_name": "Malloc1" 00:12:26.139 } 00:12:26.139 ]' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:26.139 /dev/nbd1' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:26.139 /dev/nbd1' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:26.139 256+0 records in 00:12:26.139 256+0 records out 00:12:26.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00931403 s, 113 MB/s 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:26.139 256+0 records in 00:12:26.139 256+0 records out 00:12:26.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214114 s, 49.0 MB/s 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:26.139 13:30:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:26.397 256+0 records in 00:12:26.397 256+0 records out 00:12:26.397 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236497 s, 44.3 MB/s 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:26.397 13:30:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.397 13:30:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:26.655 13:30:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:26.913 13:30:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:27.172 13:30:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:27.172 13:30:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:27.430 13:30:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:27.688 [2024-05-15 13:30:40.658609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:27.688 [2024-05-15 13:30:40.707373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.688 [2024-05-15 13:30:40.707376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.688 [2024-05-15 13:30:40.751870] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:27.688 [2024-05-15 13:30:40.751926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:30.977 spdk_app_start Round 2 00:12:30.977 13:30:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:30.977 13:30:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:30.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:30.977 13:30:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73259 /var/tmp/spdk-nbd.sock 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 73259 ']' 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
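In between, the data path of every round is checked the same way; a sketch of that write-and-verify step, assuming the nbdrandtest scratch file location recorded above:

TESTDIR=/home/vagrant/spdk_repo/spdk/test/event
dd if=/dev/urandom of="$TESTDIR/nbdrandtest" bs=4096 count=256                # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$TESTDIR/nbdrandtest" of="$nbd" bs=4096 count=256 oflag=direct     # push it through the malloc bdev
    cmp -b -n 1M "$TESTDIR/nbdrandtest" "$nbd"                                # read back and compare byte-for-byte
done
rm "$TESTDIR/nbdrandtest"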
00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.977 13:30:43 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:30.977 13:30:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:31.234 Malloc0 00:12:31.234 13:30:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:31.496 Malloc1 00:12:31.496 13:30:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.496 13:30:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:31.756 /dev/nbd0 00:12:31.756 13:30:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.756 13:30:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:31.756 1+0 records in 00:12:31.756 1+0 records out 
00:12:31.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260716 s, 15.7 MB/s 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:31.756 13:30:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:31.756 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.756 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.756 13:30:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:32.014 /dev/nbd1 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:32.014 1+0 records in 00:12:32.014 1+0 records out 00:12:32.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367965 s, 11.1 MB/s 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:32.014 13:30:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.014 13:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:32.272 { 00:12:32.272 "nbd_device": "/dev/nbd0", 00:12:32.272 "bdev_name": "Malloc0" 00:12:32.272 }, 00:12:32.272 { 00:12:32.272 "nbd_device": "/dev/nbd1", 00:12:32.272 "bdev_name": "Malloc1" 00:12:32.272 } 
00:12:32.272 ]' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:32.272 { 00:12:32.272 "nbd_device": "/dev/nbd0", 00:12:32.272 "bdev_name": "Malloc0" 00:12:32.272 }, 00:12:32.272 { 00:12:32.272 "nbd_device": "/dev/nbd1", 00:12:32.272 "bdev_name": "Malloc1" 00:12:32.272 } 00:12:32.272 ]' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:32.272 /dev/nbd1' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:32.272 /dev/nbd1' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:32.272 256+0 records in 00:12:32.272 256+0 records out 00:12:32.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630639 s, 166 MB/s 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:32.272 256+0 records in 00:12:32.272 256+0 records out 00:12:32.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213091 s, 49.2 MB/s 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:32.272 256+0 records in 00:12:32.272 256+0 records out 00:12:32.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301646 s, 34.8 MB/s 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:32.272 13:30:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.272 13:30:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:32.836 13:30:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.092 13:30:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:33.348 13:30:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:33.348 13:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:33.348 13:30:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:33.349 13:30:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:33.349 13:30:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:33.606 13:30:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:33.864 [2024-05-15 13:30:46.784777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:33.864 [2024-05-15 13:30:46.836337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.864 [2024-05-15 13:30:46.836337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.864 [2024-05-15 13:30:46.880408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:33.864 [2024-05-15 13:30:46.880464] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:37.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:37.141 13:30:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73259 /var/tmp/spdk-nbd.sock 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 73259 ']' 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
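The round logged above is the data-integrity core of the nbd test: a 1 MiB file is filled from /dev/urandom, copied onto each exported /dev/nbdX with oflag=direct, and then cmp -b -n 1M reads it back through the same device to confirm the bytes survived the round trip before the devices are detached. A condensed sketch of that write-then-verify loop; the temp file path and device list here are illustrative, not the script's own:

  #!/usr/bin/env bash
  set -e
  tmp_file=/tmp/nbdrandtest                 # placeholder path
  nbd_list=(/dev/nbd0 /dev/nbd1)

  # Write phase: 256 x 4 KiB of random data, pushed through O_DIRECT.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: compare the first 1 MiB of each device against the source file.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"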
00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:37.141 13:30:49 event.app_repeat -- event/event.sh@39 -- # killprocess 73259 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 73259 ']' 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 73259 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73259 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:37.141 killing process with pid 73259 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73259' 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@965 -- # kill 73259 00:12:37.141 13:30:49 event.app_repeat -- common/autotest_common.sh@970 -- # wait 73259 00:12:37.141 spdk_app_start is called in Round 0. 00:12:37.141 Shutdown signal received, stop current app iteration 00:12:37.141 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:37.141 spdk_app_start is called in Round 1. 00:12:37.141 Shutdown signal received, stop current app iteration 00:12:37.141 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:37.141 spdk_app_start is called in Round 2. 00:12:37.141 Shutdown signal received, stop current app iteration 00:12:37.141 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:37.141 spdk_app_start is called in Round 3. 00:12:37.141 Shutdown signal received, stop current app iteration 00:12:37.141 13:30:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:37.141 13:30:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:37.141 00:12:37.141 real 0m18.871s 00:12:37.141 user 0m42.263s 00:12:37.141 sys 0m3.318s 00:12:37.141 13:30:50 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:37.141 13:30:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 ************************************ 00:12:37.141 END TEST app_repeat 00:12:37.141 ************************************ 00:12:37.141 13:30:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:37.141 13:30:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:37.141 13:30:50 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:37.141 13:30:50 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.141 13:30:50 event -- common/autotest_common.sh@10 -- # set +x 00:12:37.141 ************************************ 00:12:37.141 START TEST cpu_locks 00:12:37.141 ************************************ 00:12:37.141 13:30:50 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:37.399 * Looking for test storage... 
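The shutdown above goes through the common killprocess helper rather than a bare kill: it first checks that the pid still refers to an SPDK reactor (on Linux, ps --no-headers -o comm= should report reactor_0), then sends the signal and waits so the test only proceeds once the process is really gone. A rough re-creation of that guard, with the expected process name hard-coded as an assumption and a hypothetical function name:

  # Kill an SPDK app only if the pid still looks like one (sketch).
  kill_spdk() {
      local pid=$1 name
      name=$(ps --no-headers -o comm= "$pid") || return 0   # already gone
      if [[ $name != reactor_0 ]]; then
          echo "pid $pid is '$name', refusing to kill" >&2
          return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true    # wait only works for children of this shell
  }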
00:12:37.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:37.399 13:30:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:37.399 13:30:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:37.399 13:30:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:37.399 13:30:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:37.399 13:30:50 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:37.399 13:30:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.399 13:30:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:37.399 ************************************ 00:12:37.399 START TEST default_locks 00:12:37.399 ************************************ 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73691 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73691 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 73691 ']' 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:37.399 13:30:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:37.399 [2024-05-15 13:30:50.383676] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:37.399 [2024-05-15 13:30:50.383990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73691 ] 00:12:37.657 [2024-05-15 13:30:50.510466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
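Each cpu_locks sub-test starts the same way as the launch above: spdk_tgt is pinned to core 0 via -m 0x1, and the script blocks until the app's RPC socket answers before doing anything else. The real waitforlisten helper lives in autotest_common.sh; the loop below is only a simplified stand-in that polls the socket with rpc_get_methods and a short per-attempt timeout:

  # Start the target on core 0 and wait for its RPC socket (sketch).
  build/bin/spdk_tgt -m 0x1 &
  tgt_pid=$!

  rpc_sock=/var/tmp/spdk.sock
  for i in {1..100}; do
      if scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          echo "target (pid $tgt_pid) is listening on $rpc_sock"
          break
      fi
      sleep 0.1
  done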
00:12:37.657 [2024-05-15 13:30:50.524031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.657 [2024-05-15 13:30:50.600088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.270 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:38.270 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:12:38.270 13:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73691 00:12:38.270 13:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73691 00:12:38.270 13:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73691 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 73691 ']' 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 73691 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73691 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73691' 00:12:38.837 killing process with pid 73691 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 73691 00:12:38.837 13:30:51 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 73691 00:12:39.094 13:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73691 00:12:39.094 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73691 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 73691 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 73691 ']' 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
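locks_exist, seen just above, is the assertion at the heart of these tests: lslocks -p <pid> lists every file lock the target holds, and grep -q spdk_cpu_lock checks that at least one of them is a CPU-core lock file (the path contains "spdk_cpu_lock"). Something like the following, with the function name chosen here purely for illustration:

  # Assert that an SPDK process holds its CPU core lock file (sketch).
  assert_core_lock() {
      local pid=$1
      if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
          echo "pid $pid holds a core lock"
      else
          echo "pid $pid holds no core lock" >&2
          return 1
      fi
  }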
00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (73691) - No such process 00:12:39.095 ERROR: process (pid: 73691) is no longer running 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:39.095 00:12:39.095 real 0m1.754s 00:12:39.095 user 0m1.825s 00:12:39.095 sys 0m0.548s 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:39.095 13:30:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 ************************************ 00:12:39.095 END TEST default_locks 00:12:39.095 ************************************ 00:12:39.095 13:30:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:39.095 13:30:52 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:39.095 13:30:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:39.095 13:30:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 ************************************ 00:12:39.095 START TEST default_locks_via_rpc 00:12:39.095 ************************************ 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73743 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73743 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 73743 ']' 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
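After the target is gone, the script checks the negative case through the NOT wrapper: it runs the wrapped command, records its exit status in es, and succeeds only if that status was non-zero, so NOT waitforlisten on a dead pid passes exactly because the wait fails. A stripped-down version of the same idea (the real helper also validates its argument with type -t and distinguishes signal deaths via the es > 128 check, both simplified away here):

  # Succeed only if the wrapped command fails (sketch of a NOT-style helper).
  NOT() {
      local es=0
      "$@" || es=$?
      # es > 128 usually means the command died from a signal, which the
      # real helper treats separately; here any non-zero status counts.
      (( es != 0 ))
  }

  # Example: expect probing an already-gone pid to fail.
  NOT kill -0 999999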
00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:39.095 13:30:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.095 [2024-05-15 13:30:52.158326] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:39.095 [2024-05-15 13:30:52.158644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73743 ] 00:12:39.352 [2024-05-15 13:30:52.284835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:39.352 [2024-05-15 13:30:52.305264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.352 [2024-05-15 13:30:52.364542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73743 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73743 00:12:40.285 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73743 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 73743 ']' 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 73743 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 73743 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73743' 00:12:40.546 killing process with pid 73743 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 73743 00:12:40.546 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 73743 00:12:41.138 00:12:41.138 real 0m1.847s 00:12:41.138 user 0m2.020s 00:12:41.138 sys 0m0.544s 00:12:41.138 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:41.138 13:30:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.138 ************************************ 00:12:41.138 END TEST default_locks_via_rpc 00:12:41.138 ************************************ 00:12:41.138 13:30:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:41.138 13:30:53 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:41.138 13:30:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:41.138 13:30:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:41.138 ************************************ 00:12:41.138 START TEST non_locking_app_on_locked_coremask 00:12:41.138 ************************************ 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73789 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73789 /var/tmp/spdk.sock 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73789 ']' 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:41.138 13:30:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:41.138 [2024-05-15 13:30:54.058883] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
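default_locks_via_rpc exercises the runtime toggle logged just above: framework_disable_cpumask_locks drops the core lock of a running target, framework_enable_cpumask_locks re-acquires it, and lslocks is used before shutdown to confirm the lock is back. A compressed sketch of that sequence against an already-running target; the socket path, rpc.py location and $tgt_pid variable are assumptions carried over from the earlier sketch:

  rpc=scripts/rpc.py
  sock=/var/tmp/spdk.sock

  # Release the core lock at runtime, then take it again.
  $rpc -s "$sock" framework_disable_cpumask_locks
  $rpc -s "$sock" framework_enable_cpumask_locks

  # The lock file should be held again by the target process.
  lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"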
00:12:41.138 [2024-05-15 13:30:54.059625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73789 ] 00:12:41.138 [2024-05-15 13:30:54.185503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:41.138 [2024-05-15 13:30:54.200900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.396 [2024-05-15 13:30:54.257703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=73797 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 73797 /var/tmp/spdk2.sock 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73797 ']' 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:41.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:41.396 13:30:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:41.654 [2024-05-15 13:30:54.528495] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:41.654 [2024-05-15 13:30:54.528871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73797 ] 00:12:41.654 [2024-05-15 13:30:54.656943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:41.654 [2024-05-15 13:30:54.683068] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
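non_locking_app_on_locked_coremask, started above, shows the sharing case: the first target claims core 0 through its lock file, and the second target can still come up on the same -m 0x1 mask because it is launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice) and with its own RPC socket via -r /var/tmp/spdk2.sock, so the two apps do not collide on the default socket. The shape of the two launches, reduced to the relevant flags:

  # First instance: claims core 0 and the default RPC socket.
  build/bin/spdk_tgt -m 0x1 &
  pid1=$!

  # Second instance: same core mask, but it skips the core lock and
  # listens on a separate RPC socket, so both can run side by side.
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!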
00:12:41.654 [2024-05-15 13:30:54.683139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.912 [2024-05-15 13:30:54.788475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.477 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:42.477 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:42.477 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73789 00:12:42.477 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73789 00:12:42.477 13:30:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:43.410 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73789 00:12:43.410 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 73789 ']' 00:12:43.410 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 73789 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73789 00:12:43.411 killing process with pid 73789 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73789' 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 73789 00:12:43.411 13:30:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 73789 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 73797 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 73797 ']' 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 73797 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73797 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73797' 00:12:44.009 killing process with pid 73797 00:12:44.009 13:30:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 73797 00:12:44.009 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 73797 00:12:44.573 00:12:44.573 real 0m3.449s 00:12:44.573 user 0m3.795s 00:12:44.573 sys 0m1.066s 00:12:44.573 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:44.573 13:30:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:44.573 ************************************ 00:12:44.573 END TEST non_locking_app_on_locked_coremask 00:12:44.573 ************************************ 00:12:44.573 13:30:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:44.573 13:30:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:44.573 13:30:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.573 13:30:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:44.573 ************************************ 00:12:44.573 START TEST locking_app_on_unlocked_coremask 00:12:44.573 ************************************ 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=73864 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 73864 /var/tmp/spdk.sock 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73864 ']' 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:44.573 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.574 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:44.574 13:30:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:44.574 [2024-05-15 13:30:57.582086] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:44.574 [2024-05-15 13:30:57.582562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73864 ] 00:12:44.831 [2024-05-15 13:30:57.710398] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:44.831 [2024-05-15 13:30:57.729535] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:44.831 [2024-05-15 13:30:57.729844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.831 [2024-05-15 13:30:57.788954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=73880 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 73880 /var/tmp/spdk2.sock 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73880 ']' 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:45.765 13:30:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:45.765 [2024-05-15 13:30:58.597628] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:45.765 [2024-05-15 13:30:58.598046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:12:45.765 [2024-05-15 13:30:58.734957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
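locking_app_on_unlocked_coremask inverts that arrangement: this time the primary target is the one started with --disable-cpumask-locks, so the core 0 lock file stays free, and the second instance launched above against /var/tmp/spdk2.sock runs with locking enabled and is the one that ends up holding the lock, which the test then verifies with lslocks. A quick, helper-independent way to see which process owns a core lock at any point (a one-line sketch, column names per util-linux lslocks):

  # List every SPDK CPU-core lock currently held and who holds it (sketch).
  lslocks -o COMMAND,PID,PATH | grep spdk_cpu_lock || echo "no core locks held"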
00:12:45.765 [2024-05-15 13:30:58.746130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.765 [2024-05-15 13:30:58.850560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.699 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:46.699 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:46.699 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 73880 00:12:46.699 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73880 00:12:46.699 13:30:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 73864 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 73864 ']' 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 73864 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73864 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.644 killing process with pid 73864 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73864' 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 73864 00:12:47.644 13:31:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 73864 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 73880 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 73880 ']' 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 73880 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73880 00:12:48.210 killing process with pid 73880 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73880' 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 73880 00:12:48.210 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 73880 00:12:48.469 00:12:48.469 real 0m4.057s 00:12:48.469 user 0m4.609s 00:12:48.469 sys 0m1.144s 00:12:48.469 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:48.469 13:31:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:48.469 ************************************ 00:12:48.469 END TEST locking_app_on_unlocked_coremask 00:12:48.469 ************************************ 00:12:48.726 13:31:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:48.726 13:31:01 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:48.726 13:31:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:48.727 13:31:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:48.727 ************************************ 00:12:48.727 START TEST locking_app_on_locked_coremask 00:12:48.727 ************************************ 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73947 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73947 /var/tmp/spdk.sock 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73947 ']' 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.727 13:31:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:48.727 [2024-05-15 13:31:01.657845] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:48.727 [2024-05-15 13:31:01.658121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73947 ] 00:12:48.727 [2024-05-15 13:31:01.778768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
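The final scenario above, locking_app_on_locked_coremask, starts a normal target on core 0 (pid 73947 in this run) and then, in the lines that follow, demonstrates the conflict case: a second target asked for the same -m 0x1 mask without disabling locks cannot acquire the core lock and exits ("Cannot create lock on core 0..."), which the script captures with the NOT wrapper. As a simplified sketch, reusing the illustrative NOT helper from earlier and a foreground launch instead of the script's background-plus-waitforlisten flow, the expected failure can be expressed like this:

  # With the first instance already holding core 0, a second locking
  # instance on the same mask should exit with an error.
  NOT build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  echo "second instance was refused the core lock, as expected"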
00:12:48.727 [2024-05-15 13:31:01.793176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.984 [2024-05-15 13:31:01.848782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73956 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73956 /var/tmp/spdk2.sock 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 73956 /var/tmp/spdk2.sock 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:48.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 73956 /var/tmp/spdk2.sock 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 73956 ']' 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:48.984 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:49.247 [2024-05-15 13:31:02.120588] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:49.247 [2024-05-15 13:31:02.120933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ] 00:12:49.247 [2024-05-15 13:31:02.244902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:49.247 [2024-05-15 13:31:02.270251] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73947 has claimed it. 
00:12:49.247 [2024-05-15 13:31:02.270342] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:49.818 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (73956) - No such process 00:12:49.818 ERROR: process (pid: 73956) is no longer running 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73947 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73947 00:12:49.818 13:31:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73947 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 73947 ']' 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 73947 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73947 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:50.384 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73947' 00:12:50.385 killing process with pid 73947 00:12:50.385 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 73947 00:12:50.385 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 73947 00:12:50.643 00:12:50.643 real 0m2.135s 00:12:50.643 user 0m2.403s 00:12:50.643 sys 0m0.621s 00:12:50.643 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:50.643 13:31:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:50.643 ************************************ 00:12:50.643 END TEST locking_app_on_locked_coremask 00:12:50.643 ************************************ 00:12:50.901 13:31:03 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:12:50.901 13:31:03 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:50.901 13:31:03 event.cpu_locks 
-- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.901 13:31:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 ************************************ 00:12:50.901 START TEST locking_overlapped_coremask 00:12:50.901 ************************************ 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74001 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 74001 /var/tmp/spdk.sock 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 74001 ']' 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:50.901 13:31:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:50.901 [2024-05-15 13:31:03.866048] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:50.901 [2024-05-15 13:31:03.866460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74001 ] 00:12:50.901 [2024-05-15 13:31:03.994333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:51.159 [2024-05-15 13:31:04.007491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.159 [2024-05-15 13:31:04.063965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.159 [2024-05-15 13:31:04.064147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.159 [2024-05-15 13:31:04.064148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74019 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74019 /var/tmp/spdk2.sock 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 74019 /var/tmp/spdk2.sock 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 74019 /var/tmp/spdk2.sock 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 74019 ']' 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:51.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.724 13:31:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.724 [2024-05-15 13:31:04.816190] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:51.724 [2024-05-15 13:31:04.816535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74019 ] 00:12:51.982 [2024-05-15 13:31:04.953037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
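[editor's note] The conflict that trips the second target can be read straight off the two core masks: 0x7 selects cores 0-2, 0x1c selects cores 2-4, and the only shared bit is core 2, which is exactly the core named in the claim error that follows. A quick shell check of that intersection:

  # Cores selected by each mask, and their intersection.
  for mask in 0x7 0x1c; do
      cores=()
      for bit in 0 1 2 3 4 5 6 7; do
          (( (mask >> bit) & 1 )) && cores+=("$bit")
      done
      echo "$mask -> cores ${cores[*]}"
  done
  printf 'overlap: 0x%x (bit 2, i.e. core 2)\n' $(( 0x7 & 0x1c ))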
00:12:51.982 [2024-05-15 13:31:04.963592] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74001 has claimed it. 00:12:51.982 [2024-05-15 13:31:04.963658] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:12:52.548 ERROR: process (pid: 74019) is no longer running 00:12:52.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (74019) - No such process 00:12:52.548 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 74001 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 74001 ']' 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 74001 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74001 00:12:52.549 killing process with pid 74001 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74001' 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 74001 00:12:52.549 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 74001 00:12:53.114 ************************************ 00:12:53.114 END TEST locking_overlapped_coremask 00:12:53.114 ************************************ 00:12:53.114 00:12:53.114 real 0m2.169s 00:12:53.114 user 0m6.058s 00:12:53.114 sys 0m0.452s 00:12:53.114 
13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:53.114 13:31:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 13:31:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:12:53.114 13:31:06 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:53.114 13:31:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:53.114 13:31:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 ************************************ 00:12:53.114 START TEST locking_overlapped_coremask_via_rpc 00:12:53.114 ************************************ 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74059 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 74059 /var/tmp/spdk.sock 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 74059 ']' 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.114 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.114 [2024-05-15 13:31:06.084867] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:53.114 [2024-05-15 13:31:06.085197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74059 ] 00:12:53.397 [2024-05-15 13:31:06.216291] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:53.397 [2024-05-15 13:31:06.232356] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
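[editor's note] Unlike the previous case, both targets in locking_overlapped_coremask_via_rpc are launched with --disable-cpumask-locks, so overlapping masks are tolerated at startup (hence the "CPU core locks deactivated" notice above) and the contention is only introduced later over RPC. A hedged sketch of that launch phase, with the same illustrative sleep standing in for the harness's readiness polling:

  spdk_tgt -m 0x7  --disable-cpumask-locks &
  pid1=$!
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 2   # both come up despite sharing core 2: no core locks are taken at startup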
00:12:53.397 [2024-05-15 13:31:06.232606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.397 [2024-05-15 13:31:06.292463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.397 [2024-05-15 13:31:06.292567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.397 [2024-05-15 13:31:06.292579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74070 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 74070 /var/tmp/spdk2.sock 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 74070 ']' 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:53.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:53.656 13:31:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.656 [2024-05-15 13:31:06.573586] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:53.656 [2024-05-15 13:31:06.573912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74070 ] 00:12:53.656 [2024-05-15 13:31:06.710144] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:53.656 [2024-05-15 13:31:06.721416] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:53.656 [2024-05-15 13:31:06.721475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.914 [2024-05-15 13:31:06.844437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.914 [2024-05-15 13:31:06.844511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.914 [2024-05-15 13:31:06.844511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.848 [2024-05-15 13:31:07.637369] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74059 has claimed it. 
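[editor's note] The runtime half of the test drives the framework_enable_cpumask_locks RPC seen in the trace above: issued against the default socket it succeeds and the first target claims cores 0-2, while the same call against /var/tmp/spdk2.sock has to fail because core 2 is now locked (the claim error above and the JSON-RPC error that follows). A minimal reproduction, assuming scripts/rpc.py exposes the method as a subcommand the way the harness's rpc_cmd wrapper does:

  # Against the default socket (/var/tmp/spdk.sock) the first target claims cores 0-2.
  scripts/rpc.py framework_enable_cpumask_locks

  # The second target overlaps on core 2, so the same call is expected to fail here.
  if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
      echo "expected: second target could not claim core 2"
  fi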
00:12:54.848 request: 00:12:54.848 { 00:12:54.848 "method": "framework_enable_cpumask_locks", 00:12:54.848 "req_id": 1 00:12:54.848 } 00:12:54.848 Got JSON-RPC error response 00:12:54.848 response: 00:12:54.848 { 00:12:54.848 "code": -32603, 00:12:54.848 "message": "Failed to claim CPU core: 2" 00:12:54.848 } 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 74059 /var/tmp/spdk.sock 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 74059 ']' 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 74070 /var/tmp/spdk2.sock 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 74070 ']' 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
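[editor's note] The failure surfaces above as a regular JSON-RPC error envelope (code -32603, "Failed to claim CPU core: 2") rather than a crash, so a caller can key off the client's non-zero exit status and, less reliably, the message text. One illustrative way to capture it; whether the message appears verbatim in the client output depends on the RPC client, so the exit status is the signal to trust:

  out=$(scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 2>&1)
  rc=$?
  if (( rc != 0 )) && grep -q 'Failed to claim CPU core' <<<"$out"; then
      echo "lock contention detected on the overlapped core"
  fi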
00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:54.848 13:31:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.418 ************************************ 00:12:55.418 END TEST locking_overlapped_coremask_via_rpc 00:12:55.418 ************************************ 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:12:55.418 00:12:55.418 real 0m2.217s 00:12:55.418 user 0m1.337s 00:12:55.418 sys 0m0.192s 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.418 13:31:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.418 13:31:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:12:55.418 13:31:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74059 ]] 00:12:55.418 13:31:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74059 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 74059 ']' 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 74059 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74059 00:12:55.418 killing process with pid 74059 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74059' 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 74059 00:12:55.418 13:31:08 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 74059 00:12:55.746 13:31:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74070 ]] 00:12:55.746 13:31:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74070 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 74070 ']' 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 74070 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.746 
13:31:08 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74070 00:12:55.746 killing process with pid 74070 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74070' 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 74070 00:12:55.746 13:31:08 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 74070 00:12:56.004 13:31:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 74059 ]] 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 74059 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 74059 ']' 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 74059 00:12:56.004 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (74059) - No such process 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 74059 is not found' 00:12:56.004 Process with pid 74059 is not found 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 74070 ]] 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 74070 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 74070 ']' 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 74070 00:12:56.004 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (74070) - No such process 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 74070 is not found' 00:12:56.004 Process with pid 74070 is not found 00:12:56.004 13:31:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:12:56.004 ************************************ 00:12:56.004 END TEST cpu_locks 00:12:56.004 ************************************ 00:12:56.004 00:12:56.004 real 0m18.820s 00:12:56.004 user 0m33.340s 00:12:56.004 sys 0m5.372s 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.004 13:31:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:56.004 ************************************ 00:12:56.004 END TEST event 00:12:56.004 ************************************ 00:12:56.004 00:12:56.004 real 0m46.964s 00:12:56.004 user 1m31.852s 00:12:56.004 sys 0m9.527s 00:12:56.004 13:31:09 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:56.004 13:31:09 event -- common/autotest_common.sh@10 -- # set +x 00:12:56.004 13:31:09 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:56.004 13:31:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:56.004 13:31:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.004 13:31:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.262 ************************************ 00:12:56.262 START TEST thread 00:12:56.262 ************************************ 00:12:56.262 13:31:09 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:12:56.262 * Looking for test storage... 
00:12:56.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:12:56.262 13:31:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:56.262 13:31:09 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:12:56.262 13:31:09 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.262 13:31:09 thread -- common/autotest_common.sh@10 -- # set +x 00:12:56.262 ************************************ 00:12:56.262 START TEST thread_poller_perf 00:12:56.262 ************************************ 00:12:56.262 13:31:09 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:12:56.262 [2024-05-15 13:31:09.220721] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:56.262 [2024-05-15 13:31:09.221013] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74198 ] 00:12:56.262 [2024-05-15 13:31:09.343449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:56.520 [2024-05-15 13:31:09.362873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.520 [2024-05-15 13:31:09.411171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.520 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:12:57.457 ====================================== 00:12:57.457 busy:2109705990 (cyc) 00:12:57.457 total_run_count: 369000 00:12:57.457 tsc_hz: 2100000000 (cyc) 00:12:57.457 ====================================== 00:12:57.457 poller_cost: 5717 (cyc), 2722 (nsec) 00:12:57.457 00:12:57.457 real 0m1.281s 00:12:57.457 user 0m1.121s 00:12:57.457 sys 0m0.053s 00:12:57.457 13:31:10 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:57.457 ************************************ 00:12:57.457 END TEST thread_poller_perf 00:12:57.457 ************************************ 00:12:57.457 13:31:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:57.457 13:31:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:57.457 13:31:10 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:12:57.457 13:31:10 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:57.457 13:31:10 thread -- common/autotest_common.sh@10 -- # set +x 00:12:57.457 ************************************ 00:12:57.457 START TEST thread_poller_perf 00:12:57.457 ************************************ 00:12:57.457 13:31:10 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:12:57.714 [2024-05-15 13:31:10.568944] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:12:57.714 [2024-05-15 13:31:10.569291] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:12:57.714 [2024-05-15 13:31:10.695044] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:57.714 [2024-05-15 13:31:10.710658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.714 [2024-05-15 13:31:10.758816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.714 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:12:59.088 ====================================== 00:12:59.088 busy:2101971120 (cyc) 00:12:59.088 total_run_count: 4865000 00:12:59.088 tsc_hz: 2100000000 (cyc) 00:12:59.088 ====================================== 00:12:59.088 poller_cost: 432 (cyc), 205 (nsec) 00:12:59.088 ************************************ 00:12:59.088 END TEST thread_poller_perf 00:12:59.088 ************************************ 00:12:59.088 00:12:59.088 real 0m1.278s 00:12:59.088 user 0m1.118s 00:12:59.088 sys 0m0.052s 00:12:59.088 13:31:11 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.088 13:31:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:12:59.088 13:31:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:12:59.088 ************************************ 00:12:59.088 END TEST thread 00:12:59.088 ************************************ 00:12:59.088 00:12:59.088 real 0m2.772s 00:12:59.088 user 0m2.313s 00:12:59.088 sys 0m0.240s 00:12:59.088 13:31:11 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.088 13:31:11 thread -- common/autotest_common.sh@10 -- # set +x 00:12:59.088 13:31:11 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:59.088 13:31:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:59.088 13:31:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.088 13:31:11 -- common/autotest_common.sh@10 -- # set +x 00:12:59.088 ************************************ 00:12:59.088 START TEST accel 00:12:59.088 ************************************ 00:12:59.088 13:31:11 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:12:59.088 * Looking for test storage... 00:12:59.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:59.088 13:31:12 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:12:59.088 13:31:12 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:12:59.088 13:31:12 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:59.088 13:31:12 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=74302 00:12:59.088 13:31:12 accel -- accel/accel.sh@63 -- # waitforlisten 74302 00:12:59.088 13:31:12 accel -- common/autotest_common.sh@827 -- # '[' -z 74302 ']' 00:12:59.088 13:31:12 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.088 13:31:12 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.088 13:31:12 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:12:59.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
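[editor's note] The two poller_perf summaries above are tied together by simple arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure follows from the reported 2.1 GHz tsc_hz. Reproducing the printed values:

  # 1 us period run: 2109705990 cyc / 369000 polls  -> 5717 cyc -> ~2722 ns at 2.1 GHz
  echo $(( 2109705990 / 369000 ))                 # 5717
  echo $(( 5717 * 1000000000 / 2100000000 ))      # 2722

  # 0 us period run: 2101971120 cyc / 4865000 polls -> 432 cyc  -> ~205 ns
  echo $(( 2101971120 / 4865000 ))                # 432
  echo $(( 432 * 1000000000 / 2100000000 ))       # 205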
00:12:59.088 13:31:12 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.088 13:31:12 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.088 13:31:12 accel -- common/autotest_common.sh@10 -- # set +x 00:12:59.088 13:31:12 accel -- accel/accel.sh@61 -- # build_accel_config 00:12:59.088 13:31:12 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:59.088 13:31:12 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:59.088 13:31:12 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:59.088 13:31:12 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:59.088 13:31:12 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:59.088 13:31:12 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:59.088 13:31:12 accel -- accel/accel.sh@41 -- # jq -r . 00:12:59.088 [2024-05-15 13:31:12.071369] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:59.088 [2024-05-15 13:31:12.072003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:12:59.346 [2024-05-15 13:31:12.192888] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:59.346 [2024-05-15 13:31:12.203397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.346 [2024-05-15 13:31:12.253375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.279 13:31:13 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.279 13:31:13 accel -- common/autotest_common.sh@860 -- # return 0 00:13:00.279 13:31:13 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:00.279 13:31:13 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:00.279 13:31:13 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:00.279 13:31:13 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:00.279 13:31:13 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:00.279 13:31:13 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:00.279 13:31:13 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:00.279 13:31:13 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.279 13:31:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.279 13:31:13 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.279 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.279 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.279 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 
13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # IFS== 00:13:00.280 13:31:13 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:00.280 13:31:13 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:00.280 13:31:13 accel -- accel/accel.sh@75 -- # killprocess 74302 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@946 -- # '[' -z 74302 ']' 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@950 -- # kill -0 74302 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@951 -- # uname 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74302 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74302' 00:13:00.280 killing process with pid 74302 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@965 -- # kill 74302 00:13:00.280 13:31:13 accel -- common/autotest_common.sh@970 -- # wait 74302 00:13:00.536 13:31:13 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:00.536 13:31:13 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.536 13:31:13 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:00.536 13:31:13 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
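[editor's note] The long block further up is the harness walking the accel_get_opc_assignments output and recording that every opcode is currently served by the software module. Outside the harness the same snapshot can be taken directly, reusing the jq transform from the trace; the example output keys are an assumption, since only the "software" values are visible in this excerpt:

  scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # e.g. copy=software, fill=software, crc32c=software, ... when no hardware module is loaded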
00:13:00.536 13:31:13 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.536 13:31:13 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:00.536 13:31:13 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.536 13:31:13 accel -- common/autotest_common.sh@10 -- # set +x 00:13:00.536 ************************************ 00:13:00.536 START TEST accel_missing_filename 00:13:00.536 ************************************ 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.536 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:00.536 13:31:13 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:00.536 [2024-05-15 13:31:13.574022] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:00.536 [2024-05-15 13:31:13.574386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74357 ] 00:13:00.792 [2024-05-15 13:31:13.700569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:00.792 [2024-05-15 13:31:13.720640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.792 [2024-05-15 13:31:13.777731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.792 [2024-05-15 13:31:13.828376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.049 [2024-05-15 13:31:13.895806] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:01.049 A filename is required. 
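[editor's note] accel_missing_filename below checks argument validation: a compress workload launched without -l (the uncompressed input file) must make accel_perf exit with an error, which is what the "A filename is required." line records. Assuming build/examples/accel_perf is on PATH, the check boils down to something like:

  if accel_perf -t 1 -w compress; then            # no -l <file> given, on purpose
      echo "unexpected: compress ran without an input file" >&2
  else
      echo "expected failure: compress requires -l <uncompressed input file>"
  fi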
00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.049 00:13:01.049 real 0m0.429s 00:13:01.049 user 0m0.266s 00:13:01.049 sys 0m0.118s 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.049 13:31:13 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:01.049 ************************************ 00:13:01.049 END TEST accel_missing_filename 00:13:01.049 ************************************ 00:13:01.049 13:31:14 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:01.049 13:31:14 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:01.049 13:31:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.049 13:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:01.049 ************************************ 00:13:01.049 START TEST accel_compress_verify 00:13:01.049 ************************************ 00:13:01.049 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:01.049 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:13:01.049 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:01.050 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:01.050 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.050 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:01.050 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.050 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.050 13:31:14 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:01.050 13:31:14 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:13:01.050 [2024-05-15 13:31:14.052858] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:01.050 [2024-05-15 13:31:14.053688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74381 ] 00:13:01.307 [2024-05-15 13:31:14.180140] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:01.307 [2024-05-15 13:31:14.196971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.307 [2024-05-15 13:31:14.265380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.307 [2024-05-15 13:31:14.325991] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.307 [2024-05-15 13:31:14.390523] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:01.565 00:13:01.565 Compression does not support the verify option, aborting. 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.565 00:13:01.565 real 0m0.436s 00:13:01.565 user 0m0.238s 00:13:01.565 sys 0m0.123s 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.565 13:31:14 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 ************************************ 00:13:01.565 END TEST accel_compress_verify 00:13:01.565 ************************************ 00:13:01.565 13:31:14 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 ************************************ 00:13:01.565 START TEST accel_wrong_workload 00:13:01.565 ************************************ 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
foobar 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:01.565 13:31:14 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:13:01.565 Unsupported workload type: foobar 00:13:01.565 [2024-05-15 13:31:14.544881] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:01.565 accel_perf options: 00:13:01.565 [-h help message] 00:13:01.565 [-q queue depth per core] 00:13:01.565 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:01.565 [-T number of threads per core 00:13:01.565 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:01.565 [-t time in seconds] 00:13:01.565 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:01.565 [ dif_verify, , dif_generate, dif_generate_copy 00:13:01.565 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:01.565 [-l for compress/decompress workloads, name of uncompressed input file 00:13:01.565 [-S for crc32c workload, use this seed value (default 0) 00:13:01.565 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:01.565 [-f for fill workload, use this BYTE value (default 255) 00:13:01.565 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:01.565 [-y verify result if this switch is on] 00:13:01.565 [-a tasks to allocate per core (default: same value as -q)] 00:13:01.565 Can be used to spread operations across a wider range of memory. 
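(For reference, and not part of the captured output: a well-formed invocation assembled only from the options listed in the usage text above and the binary path already shown in this log might look like the sketch below; the -q, -o, and -S values are illustrative assumptions rather than values taken from any run recorded here.)
# Illustrative sketch only -- not captured output.
# crc32c workload for 1 second on 4 KiB buffers, queue depth 64,
# seed 32, with result verification enabled (-y).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -q 64 -o 4096 -S 32 -y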
00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.565 00:13:01.565 real 0m0.034s 00:13:01.565 user 0m0.014s 00:13:01.565 sys 0m0.018s 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.565 13:31:14 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 ************************************ 00:13:01.565 END TEST accel_wrong_workload 00:13:01.565 ************************************ 00:13:01.565 13:31:14 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.565 13:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:01.565 ************************************ 00:13:01.565 START TEST accel_negative_buffers 00:13:01.565 ************************************ 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:01.565 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:01.565 13:31:14 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:01.565 -x option must be non-negative. 
00:13:01.566 [2024-05-15 13:31:14.633370] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:01.566 accel_perf options: 00:13:01.566 [-h help message] 00:13:01.566 [-q queue depth per core] 00:13:01.566 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:01.566 [-T number of threads per core 00:13:01.566 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:01.566 [-t time in seconds] 00:13:01.566 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:01.566 [ dif_verify, , dif_generate, dif_generate_copy 00:13:01.566 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:01.566 [-l for compress/decompress workloads, name of uncompressed input file 00:13:01.566 [-S for crc32c workload, use this seed value (default 0) 00:13:01.566 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:01.566 [-f for fill workload, use this BYTE value (default 255) 00:13:01.566 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:01.566 [-y verify result if this switch is on] 00:13:01.566 [-a tasks to allocate per core (default: same value as -q)] 00:13:01.566 Can be used to spread operations across a wider range of memory. 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.566 00:13:01.566 real 0m0.039s 00:13:01.566 user 0m0.020s 00:13:01.566 sys 0m0.016s 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.566 13:31:14 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:01.566 ************************************ 00:13:01.566 END TEST accel_negative_buffers 00:13:01.566 ************************************ 00:13:01.904 13:31:14 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:01.904 13:31:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:01.904 13:31:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.904 13:31:14 accel -- common/autotest_common.sh@10 -- # set +x 00:13:01.904 ************************************ 00:13:01.904 START TEST accel_crc32c 00:13:01.904 ************************************ 00:13:01.904 13:31:14 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
crc32c -S 32 -y 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:01.904 [2024-05-15 13:31:14.720159] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:01.904 [2024-05-15 13:31:14.720480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74440 ] 00:13:01.904 [2024-05-15 13:31:14.848365] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:01.904 [2024-05-15 13:31:14.864965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.904 [2024-05-15 13:31:14.919733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:01.904 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:02.165 13:31:14 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:03.103 13:31:16 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:03.103 00:13:03.103 real 0m1.413s 00:13:03.103 user 0m1.207s 00:13:03.103 sys 0m0.107s 00:13:03.103 13:31:16 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:03.103 13:31:16 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:03.103 ************************************ 00:13:03.103 END TEST accel_crc32c 00:13:03.103 ************************************ 00:13:03.103 13:31:16 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:03.103 13:31:16 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:03.103 13:31:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:03.103 13:31:16 accel -- common/autotest_common.sh@10 -- # set +x 00:13:03.103 ************************************ 00:13:03.103 START TEST accel_crc32c_C2 00:13:03.103 ************************************ 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 
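(Aside, not part of the captured output: per the usage text recorded earlier in this log, -C configures the io vector size to test, so this run exercises crc32c over a 2-element io vector with verification enabled. A minimal standalone re-run, using only flags already shown above, would be the sketch below.)
# Illustrative sketch only -- not captured output.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -y -C 2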
00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:03.103 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:03.103 [2024-05-15 13:31:16.189607] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:03.103 [2024-05-15 13:31:16.189955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74474 ] 00:13:03.361 [2024-05-15 13:31:16.318007] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:03.361 [2024-05-15 13:31:16.335677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.361 [2024-05-15 13:31:16.383411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=0 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.361 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:03.362 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:03.362 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:03.362 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:03.362 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:13:03.362 13:31:16 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:04.733 00:13:04.733 real 0m1.402s 00:13:04.733 user 0m1.207s 00:13:04.733 sys 0m0.103s 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.733 ************************************ 00:13:04.733 END TEST accel_crc32c_C2 00:13:04.733 ************************************ 00:13:04.733 13:31:17 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 13:31:17 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:04.733 13:31:17 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:04.733 13:31:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.733 13:31:17 accel -- common/autotest_common.sh@10 -- # set +x 00:13:04.733 ************************************ 00:13:04.733 START TEST accel_copy 00:13:04.733 ************************************ 00:13:04.733 13:31:17 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.733 13:31:17 
accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:04.733 13:31:17 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:13:04.733 [2024-05-15 13:31:17.644830] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:04.733 [2024-05-15 13:31:17.645090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:13:04.733 [2024-05-15 13:31:17.764983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:04.733 [2024-05-15 13:31:17.787015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.992 [2024-05-15 13:31:17.851882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:04.992 13:31:17 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 
13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:13:06.366 13:31:19 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:06.366 00:13:06.366 real 0m1.424s 00:13:06.366 user 0m1.218s 00:13:06.366 sys 0m0.112s 00:13:06.366 13:31:19 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:06.366 13:31:19 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:13:06.366 ************************************ 00:13:06.366 END TEST accel_copy 00:13:06.366 ************************************ 00:13:06.366 13:31:19 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:06.366 13:31:19 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:06.366 13:31:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.366 13:31:19 accel -- common/autotest_common.sh@10 -- # set +x 00:13:06.366 ************************************ 00:13:06.366 START TEST accel_fill 00:13:06.366 ************************************ 00:13:06.366 13:31:19 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:06.366 13:31:19 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:13:06.366 13:31:19 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:13:06.366 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.366 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.366 13:31:19 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:13:06.367 [2024-05-15 13:31:19.132544] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:06.367 [2024-05-15 13:31:19.132861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74538 ] 00:13:06.367 [2024-05-15 13:31:19.262425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:06.367 [2024-05-15 13:31:19.281417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.367 [2024-05-15 13:31:19.340420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:06.367 13:31:19 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:07.738 ************************************ 00:13:07.738 END TEST accel_fill 00:13:07.738 ************************************ 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:13:07.738 13:31:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:07.738 00:13:07.738 real 0m1.420s 00:13:07.738 user 0m1.209s 00:13:07.738 sys 0m0.114s 00:13:07.738 13:31:20 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:07.738 13:31:20 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:13:07.738 13:31:20 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:07.738 13:31:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:07.738 13:31:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:07.738 13:31:20 accel -- common/autotest_common.sh@10 -- # set +x 00:13:07.738 ************************************ 00:13:07.738 START TEST accel_copy_crc32c 00:13:07.738 ************************************ 00:13:07.738 13:31:20 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:13:07.738 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:07.738 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:07.738 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:07.739 [2024-05-15 13:31:20.591202] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:07.739 [2024-05-15 13:31:20.591532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74578 ] 00:13:07.739 [2024-05-15 13:31:20.711482] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:07.739 [2024-05-15 13:31:20.726183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.739 [2024-05-15 13:31:20.778452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.739 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:07.998 13:31:20 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:07.998 13:31:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 ************************************ 00:13:08.932 END TEST accel_copy_crc32c 00:13:08.932 ************************************ 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:08.932 00:13:08.932 real 0m1.392s 00:13:08.932 user 0m1.188s 00:13:08.932 sys 0m0.109s 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.932 13:31:21 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:08.932 13:31:21 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:13:08.932 13:31:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:08.932 13:31:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.932 13:31:21 accel -- common/autotest_common.sh@10 -- # set +x 00:13:08.932 ************************************ 00:13:08.932 START TEST accel_copy_crc32c_C2 00:13:08.932 ************************************ 00:13:08.932 13:31:22 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:08.932 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:09.189 [2024-05-15 13:31:22.031664] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:09.189 [2024-05-15 13:31:22.032073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74607 ] 00:13:09.189 [2024-05-15 13:31:22.159619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
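For reference, the two copy_crc32c cases traced above reduce to the accel_perf command lines captured in the log. A minimal sketch of reproducing them by hand is shown here; the binary path is the one used on this worker, and the harness normally also pipes a JSON accel config via -c /dev/fd/62, which is omitted in this sketch:
  # 1-second software copy_crc32c run with result verification (-y)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
  # the accel_copy_crc32c_C2 variant adds -C 2 (per the test name, a two-buffer case)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2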
00:13:09.189 [2024-05-15 13:31:22.173291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.189 [2024-05-15 13:31:22.247392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:09.448 13:31:22 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:10.382 00:13:10.382 real 0m1.439s 00:13:10.382 user 0m1.220s 00:13:10.382 sys 0m0.119s 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.382 13:31:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:10.382 ************************************ 00:13:10.382 END TEST accel_copy_crc32c_C2 00:13:10.382 ************************************ 00:13:10.640 13:31:23 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:13:10.640 13:31:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:10.640 13:31:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.640 13:31:23 accel -- common/autotest_common.sh@10 -- # set +x 00:13:10.640 ************************************ 00:13:10.640 START TEST accel_dualcast 00:13:10.640 ************************************ 00:13:10.640 13:31:23 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:10.640 13:31:23 accel.accel_dualcast -- 
accel/accel.sh@12 -- # build_accel_config 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:13:10.640 13:31:23 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:13:10.640 [2024-05-15 13:31:23.519697] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:10.640 [2024-05-15 13:31:23.519951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74646 ] 00:13:10.640 [2024-05-15 13:31:23.648814] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:10.640 [2024-05-15 13:31:23.669056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.640 [2024-05-15 13:31:23.721602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:10.898 13:31:23 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:10.898 13:31:23 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 
accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:13:11.831 13:31:24 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:11.831 00:13:11.831 real 0m1.410s 00:13:11.831 user 0m1.205s 00:13:11.831 sys 0m0.111s 00:13:11.831 13:31:24 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.831 13:31:24 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:13:11.831 ************************************ 00:13:11.831 END TEST accel_dualcast 00:13:11.831 ************************************ 00:13:12.089 13:31:24 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:13:12.089 13:31:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:12.089 13:31:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.089 13:31:24 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.089 ************************************ 00:13:12.089 START TEST accel_compare 00:13:12.089 ************************************ 00:13:12.089 13:31:24 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:13:12.089 13:31:24 
accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:13:12.089 13:31:24 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:13:12.089 [2024-05-15 13:31:24.979711] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:12.089 [2024-05-15 13:31:24.980112] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74676 ] 00:13:12.089 [2024-05-15 13:31:25.113520] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:12.089 [2024-05-15 13:31:25.128729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.348 [2024-05-15 13:31:25.200506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:12.348 13:31:25 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:12.349 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:12.349 13:31:25 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:13:13.724 13:31:26 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:13.724 ************************************ 00:13:13.724 END TEST accel_compare 00:13:13.724 ************************************ 00:13:13.724 00:13:13.724 real 0m1.436s 00:13:13.724 user 0m1.216s 00:13:13.724 sys 0m0.124s 00:13:13.724 13:31:26 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.724 13:31:26 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:13:13.724 13:31:26 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:13.724 13:31:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:13.724 13:31:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.724 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.724 ************************************ 00:13:13.724 START TEST accel_xor 00:13:13.724 ************************************ 00:13:13.724 13:31:26 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.724 13:31:26 
accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:13.724 [2024-05-15 13:31:26.464688] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:13.724 [2024-05-15 13:31:26.464993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74711 ] 00:13:13.724 [2024-05-15 13:31:26.585650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.724 [2024-05-15 13:31:26.602352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.724 [2024-05-15 13:31:26.668945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:13.724 13:31:26 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" 
in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:15.104 13:31:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:15.104 00:13:15.104 real 0m1.419s 00:13:15.104 user 0m1.202s 00:13:15.104 sys 0m0.115s 00:13:15.104 ************************************ 00:13:15.105 END TEST accel_xor 00:13:15.105 ************************************ 00:13:15.105 13:31:27 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.105 13:31:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:15.105 13:31:27 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:13:15.105 13:31:27 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:15.105 13:31:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.105 13:31:27 accel -- common/autotest_common.sh@10 -- # set +x 00:13:15.105 ************************************ 00:13:15.105 START TEST accel_xor 00:13:15.105 ************************************ 00:13:15.105 13:31:27 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:15.105 13:31:27 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
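The dualcast, compare and xor runs above differ only in the -w workload handed to accel_perf; the second accel_xor test adds -x 3, and its trace sets three source buffers where the first run used two. A sketch under the same path assumption, again without the piped -c /dev/fd/62 config:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # two sources, as in the first xor run
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # three sources, as in the -x 3 run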
00:13:15.105 [2024-05-15 13:31:27.935934] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:15.105 [2024-05-15 13:31:27.936314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74745 ] 00:13:15.105 [2024-05-15 13:31:28.064014] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:15.105 [2024-05-15 13:31:28.076657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.105 [2024-05-15 13:31:28.130648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:15.105 13:31:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:16.542 13:31:29 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:16.542 00:13:16.542 real 0m1.406s 00:13:16.542 user 0m1.188s 00:13:16.542 sys 0m0.105s 00:13:16.542 13:31:29 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:16.542 13:31:29 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:16.542 ************************************ 00:13:16.542 END TEST accel_xor 00:13:16.542 ************************************ 00:13:16.542 13:31:29 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:16.542 13:31:29 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:16.542 13:31:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:16.542 13:31:29 accel -- common/autotest_common.sh@10 -- # set +x 00:13:16.542 ************************************ 00:13:16.542 START TEST accel_dif_verify 00:13:16.542 ************************************ 00:13:16.542 13:31:29 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:13:16.542 13:31:29 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:13:16.542 13:31:29 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:16.543 13:31:29 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:13:16.543 [2024-05-15 13:31:29.384600] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
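The accel_dif_verify case started just above follows the same pattern with -w dif_verify and no -y; the 4096-, 512- and 8-byte values logged below appear to be the test's default data, block and metadata sizes rather than extra flags. A minimal sketch under the same assumptions as the earlier examples:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify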
00:13:16.543 [2024-05-15 13:31:29.384980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74774 ] 00:13:16.543 [2024-05-15 13:31:29.511598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:16.543 [2024-05-15 13:31:29.526855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.543 [2024-05-15 13:31:29.582974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:13:16.801 13:31:29 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:16.801 13:31:29 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:16.801 13:31:29 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:17.734 ************************************ 00:13:17.734 END TEST accel_dif_verify 00:13:17.734 ************************************ 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:17.734 13:31:30 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:17.734 00:13:17.734 real 0m1.406s 00:13:17.734 user 0m1.205s 00:13:17.734 sys 0m0.110s 00:13:17.734 13:31:30 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.734 13:31:30 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:13:17.734 13:31:30 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:17.734 13:31:30 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:17.734 13:31:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.734 13:31:30 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.734 ************************************ 00:13:17.734 START TEST accel_dif_generate 00:13:17.734 ************************************ 00:13:17.734 13:31:30 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:13:17.734 13:31:30 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:13:17.734 13:31:30 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:13:17.992 [2024-05-15 13:31:30.842634] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:17.992 [2024-05-15 13:31:30.843095] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74809 ] 00:13:17.992 [2024-05-15 13:31:30.974768] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:17.992 [2024-05-15 13:31:30.992676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.992 [2024-05-15 13:31:31.051205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.250 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:18.251 13:31:31 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 ************************************ 00:13:19.186 END TEST accel_dif_generate 00:13:19.186 ************************************ 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:19.186 13:31:32 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:19.186 13:31:32 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:19.186 00:13:19.186 real 0m1.428s 00:13:19.186 user 0m1.208s 00:13:19.186 sys 0m0.123s 00:13:19.186 13:31:32 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:19.186 13:31:32 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:13:19.186 13:31:32 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:19.186 13:31:32 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:19.186 13:31:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:19.186 13:31:32 accel -- common/autotest_common.sh@10 -- # set +x 00:13:19.445 ************************************ 00:13:19.445 START TEST accel_dif_generate_copy 00:13:19.445 ************************************ 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:19.445 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:13:19.445 [2024-05-15 13:31:32.313204] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:19.445 [2024-05-15 13:31:32.313539] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74843 ] 00:13:19.445 [2024-05-15 13:31:32.434084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
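[Editor's note] Each "run_test <name> <command...>" entry above is a wrapper from common/autotest_common.sh that prints the asterisk START/END banners and the real/user/sys timings interleaved with the trace. A rough, illustrative equivalent of the behaviour observed in this log (the real helper does more, including the xtrace_disable / set +x toggling also visible above):

    # Rough sketch of the banner/timing behaviour seen in the log.
    # Not the actual common/autotest_common.sh implementation.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"; local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    # Example mirroring an invocation recorded above:
    # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy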
00:13:19.445 [2024-05-15 13:31:32.452571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.445 [2024-05-15 13:31:32.511399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.703 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:19.704 13:31:32 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:20.640 00:13:20.640 real 0m1.422s 00:13:20.640 user 0m1.209s 00:13:20.640 sys 0m0.113s 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.640 13:31:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:13:20.640 ************************************ 00:13:20.640 END TEST accel_dif_generate_copy 00:13:20.640 ************************************ 00:13:20.899 13:31:33 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:20.899 13:31:33 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:20.899 13:31:33 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:20.899 13:31:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.899 13:31:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.899 ************************************ 00:13:20.899 START TEST accel_comp 00:13:20.899 ************************************ 00:13:20.899 13:31:33 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@12 
-- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:20.899 13:31:33 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:20.899 [2024-05-15 13:31:33.778688] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:20.899 [2024-05-15 13:31:33.778962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74878 ] 00:13:20.899 [2024-05-15 13:31:33.900512] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.899 [2024-05-15 13:31:33.914359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.899 [2024-05-15 13:31:33.991301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
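[Editor's note] The compress/decompress cases drive the same accel_perf example binary against the checked-in test/accel/bib input file. Reusing only the flags recorded in the trace (-t run time in seconds, -w workload, -l input file), a single run can be reproduced by hand; the harness additionally feeds a generated JSON config via "-c /dev/fd/62", which this sketch omits:

    # Re-running the compress benchmark outside the harness, with paths
    # exactly as they appear in this log (adjust to the local SPDK checkout).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib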
00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@21 
-- # case "$var" in 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:21.157 13:31:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.091 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.091 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.091 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.091 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 ************************************ 00:13:22.350 END TEST accel_comp 00:13:22.350 ************************************ 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:22.350 13:31:35 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:22.350 00:13:22.350 real 0m1.437s 00:13:22.350 user 0m1.228s 00:13:22.350 sys 0m0.111s 00:13:22.350 13:31:35 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:22.350 13:31:35 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:22.350 13:31:35 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:22.350 13:31:35 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:22.350 13:31:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:22.350 13:31:35 accel -- common/autotest_common.sh@10 -- # set +x 00:13:22.350 ************************************ 00:13:22.350 START TEST accel_decomp 00:13:22.350 ************************************ 00:13:22.350 13:31:35 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:22.350 13:31:35 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:22.350 13:31:35 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:22.350 13:31:35 accel.accel_decomp 
-- accel/accel.sh@19 -- # IFS=: 00:13:22.350 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.350 13:31:35 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:22.350 13:31:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:22.351 13:31:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:22.351 [2024-05-15 13:31:35.262081] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:22.351 [2024-05-15 13:31:35.262874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74912 ] 00:13:22.351 [2024-05-15 13:31:35.386206] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:22.351 [2024-05-15 13:31:35.405407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.609 [2024-05-15 13:31:35.459083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.609 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:22.610 13:31:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:23.805 13:31:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:23.805 00:13:23.805 real 0m1.410s 00:13:23.805 user 0m1.202s 00:13:23.805 sys 0m0.109s 00:13:23.805 13:31:36 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.805 13:31:36 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:23.805 ************************************ 00:13:23.805 END TEST accel_decomp 00:13:23.805 ************************************ 00:13:23.805 13:31:36 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:23.805 13:31:36 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:23.805 13:31:36 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:13:23.805 13:31:36 accel -- common/autotest_common.sh@10 -- # set +x 00:13:23.805 ************************************ 00:13:23.805 START TEST accel_decmop_full 00:13:23.805 ************************************ 00:13:23.805 13:31:36 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:23.805 13:31:36 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:13:23.805 13:31:36 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:13:23.806 13:31:36 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:13:23.806 [2024-05-15 13:31:36.711679] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:23.806 [2024-05-15 13:31:36.711991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:13:23.806 [2024-05-15 13:31:36.831990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
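[Editor's note] The closing checks of each test above ("[[ -n software ]]", "[[ -n decompress ]]", "[[ software == \s\o\f\t\w\a\r\e ]]") assert that a module and an opcode were parsed from the run and that the software module handled the workload. The backslash-escaping on the right-hand side of == makes bash treat it as a literal string rather than a glob pattern inside [[ ]]; for example:

    # Escaping (or quoting) the right-hand side of == inside [[ ]] forces a
    # literal string comparison instead of a pattern match.
    accel_module=software
    [[ $accel_module == \s\o\f\t\w\a\r\e ]] && echo "software module in use"
    [[ $accel_module == "software" ]]       && echo "equivalent quoted form"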
00:13:23.806 [2024-05-15 13:31:36.849515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.064 [2024-05-15 13:31:36.905611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@22 -- # 
accel_module=software 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:24.065 13:31:36 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 
00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 ************************************ 00:13:25.443 END TEST accel_decmop_full 00:13:25.443 ************************************ 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:25.443 13:31:38 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:25.443 00:13:25.443 real 0m1.423s 00:13:25.443 user 0m0.011s 00:13:25.443 sys 0m0.003s 00:13:25.443 13:31:38 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:25.443 13:31:38 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:13:25.443 13:31:38 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:25.443 13:31:38 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:25.443 13:31:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:25.443 13:31:38 accel -- common/autotest_common.sh@10 -- # set +x 00:13:25.443 ************************************ 00:13:25.443 START TEST accel_decomp_mcore 00:13:25.443 ************************************ 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- 
# build_accel_config 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:25.443 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:25.443 [2024-05-15 13:31:38.187520] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:25.443 [2024-05-15 13:31:38.187918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74976 ] 00:13:25.443 [2024-05-15 13:31:38.315296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:25.443 [2024-05-15 13:31:38.329035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:25.443 [2024-05-15 13:31:38.406402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.443 [2024-05-15 13:31:38.406559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.444 [2024-05-15 13:31:38.406483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.444 [2024-05-15 13:31:38.406555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val=Yes 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:25.444 13:31:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 
13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:26.820 00:13:26.820 real 0m1.551s 00:13:26.820 user 0m4.823s 00:13:26.820 sys 0m0.136s 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:26.820 13:31:39 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:26.820 ************************************ 00:13:26.820 END TEST accel_decomp_mcore 00:13:26.820 ************************************ 00:13:26.821 13:31:39 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:26.821 13:31:39 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:26.821 13:31:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.821 13:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:26.821 ************************************ 00:13:26.821 START TEST accel_decomp_full_mcore 00:13:26.821 ************************************ 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:26.821 13:31:39 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:26.821 [2024-05-15 13:31:39.794079] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
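The accel_decomp_mcore run that ends above differs from the single-core tests only in the -m 0xf core mask, which is why its trace reports 'Total cores available: 4' and starts reactors on cores 0-3, and why user CPU time (0m4.823s) comes out near four times the 1-second wall-clock workload. A hedged sketch of the same pattern, reusing the hypothetical accel.json from the earlier note:

  # 0xf = binary 1111 -> EAL cores 0,1,2,3; the trace shows one SPDK reactor
  # started per set bit of the mask.
  ./build/examples/accel_perf -c accel.json -t 1 -w decompress \
      -l test/accel/bib -y -m 0xf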
00:13:26.821 [2024-05-15 13:31:39.794411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75019 ] 00:13:27.080 [2024-05-15 13:31:39.922397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:27.080 [2024-05-15 13:31:39.940182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.080 [2024-05-15 13:31:40.029913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.080 [2024-05-15 13:31:40.030121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.080 [2024-05-15 13:31:40.030120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.080 [2024-05-15 13:31:40.030024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.080 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:27.081 13:31:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 ************************************ 00:13:28.456 END TEST accel_decomp_full_mcore 00:13:28.456 ************************************ 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 
00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:28.456 00:13:28.456 real 0m1.625s 00:13:28.456 user 0m0.016s 00:13:28.456 sys 0m0.004s 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:28.456 13:31:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:28.456 13:31:41 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:28.456 13:31:41 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:28.456 13:31:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:28.456 13:31:41 accel -- common/autotest_common.sh@10 -- # set +x 00:13:28.456 ************************************ 00:13:28.456 START TEST accel_decomp_mthread 00:13:28.456 ************************************ 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:28.456 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:28.456 [2024-05-15 13:31:41.467117] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
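The accel_decomp_mthread run starting above keeps the default single-core setup (its trace shows 'Total cores available: 1' and a single reactor on core 0) and instead passes -T 2, visible as the val=2 entry in the config dump, which is what gives the test its "mthread" name: two parallel operation streams on the one core. A sketch under the same assumptions as the earlier notes:

  # -T 2: two worker threads on the single reactor core (per the test's
  # "mthread" naming); everything else matches the plain decompress run.
  ./build/examples/accel_perf -c accel.json -t 1 -w decompress \
      -l test/accel/bib -y -T 2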
00:13:28.456 [2024-05-15 13:31:41.467562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75051 ] 00:13:28.714 [2024-05-15 13:31:41.594511] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:28.714 [2024-05-15 13:31:41.607745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.714 [2024-05-15 13:31:41.680140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.714 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:28.715 13:31:41 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 ************************************ 00:13:30.091 END TEST accel_decomp_mthread 00:13:30.091 ************************************ 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:30.091 00:13:30.091 real 0m1.439s 00:13:30.091 user 0m1.222s 00:13:30.091 sys 0m0.115s 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.091 13:31:42 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:30.091 13:31:42 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:30.091 13:31:42 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:30.091 13:31:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.091 13:31:42 accel -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.091 ************************************ 00:13:30.091 START TEST accel_decomp_full_mthread 00:13:30.091 ************************************ 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:30.091 13:31:42 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:30.091 [2024-05-15 13:31:42.956977] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:30.091 [2024-05-15 13:31:42.957361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75091 ] 00:13:30.091 [2024-05-15 13:31:43.084015] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
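accel_decomp_full_mthread combines the two earlier knobs, -o 0 (full-size buffer) and -T 2 (two threads). Judging from the START/END banners and the real/user/sys lines throughout this log, run_test from test/common/autotest_common.sh essentially prints a banner, times the wrapped command, and prints a closing banner; the helper below is a rough, hypothetical stand-in for illustration only, not the actual implementation:

  # Hypothetical stand-in for run_test; the real function in
  # test/common/autotest_common.sh also handles xtrace and error reporting.
  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  # Rough equivalent of the invocation traced above, using the accel.json
  # placeholder from the earlier notes:
  run_test_sketch accel_decomp_full_mthread ./build/examples/accel_perf \
      -c accel.json -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2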
00:13:30.091 [2024-05-15 13:31:43.101365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.091 [2024-05-15 13:31:43.165798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.350 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:30.351 13:31:43 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:31.728 00:13:31.728 real 0m1.479s 00:13:31.728 user 0m1.254s 00:13:31.728 sys 0m0.125s 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.728 13:31:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:31.728 ************************************ 00:13:31.728 END TEST accel_decomp_full_mthread 00:13:31.728 ************************************ 00:13:31.728 13:31:44 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:31.728 13:31:44 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:31.728 13:31:44 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:31.728 13:31:44 accel -- 
common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:31.728 13:31:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:31.728 13:31:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.728 13:31:44 accel -- common/autotest_common.sh@10 -- # set +x 00:13:31.728 13:31:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:31.728 13:31:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:31.728 13:31:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:31.728 13:31:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:31.728 13:31:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:31.728 13:31:44 accel -- accel/accel.sh@41 -- # jq -r . 00:13:31.728 ************************************ 00:13:31.728 START TEST accel_dif_functional_tests 00:13:31.728 ************************************ 00:13:31.728 13:31:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:31.728 [2024-05-15 13:31:44.516164] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:31.728 [2024-05-15 13:31:44.516617] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75121 ] 00:13:31.728 [2024-05-15 13:31:44.647403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:31.728 [2024-05-15 13:31:44.665379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.728 [2024-05-15 13:31:44.722603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.728 [2024-05-15 13:31:44.722652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.728 [2024-05-15 13:31:44.722658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.728 00:13:31.728 00:13:31.728 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.728 http://cunit.sourceforge.net/ 00:13:31.728 00:13:31.728 00:13:31.728 Suite: accel_dif 00:13:31.728 Test: verify: DIF generated, GUARD check ...passed 00:13:31.729 Test: verify: DIF generated, APPTAG check ...passed 00:13:31.729 Test: verify: DIF generated, REFTAG check ...passed 00:13:31.729 Test: verify: DIF not generated, GUARD check ...[2024-05-15 13:31:44.800634] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:31.729 [2024-05-15 13:31:44.800832] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:31.729 passed 00:13:31.729 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 13:31:44.801023] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:31.729 [2024-05-15 13:31:44.801155] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:31.729 passed 00:13:31.729 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 13:31:44.801366] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:31.729 [2024-05-15 13:31:44.801581] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5apassed5a 00:13:31.729 00:13:31.729 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:31.729 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-05-15 13:31:44.801920] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:31.729 passed 00:13:31.729 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:31.729 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:31.729 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:31.729 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:13:31.729 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 13:31:44.802341] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:31.729 passed 00:13:31.729 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:31.729 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:31.729 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:31.729 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:31.729 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:31.729 Test: generate copy: iovecs-len validate ...[2024-05-15 13:31:44.803478] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:31.729 passed 00:13:31.729 Test: generate copy: buffer alignment validate ...passed 00:13:31.729 00:13:31.729 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.729 suites 1 1 n/a 0 0 00:13:31.729 tests 20 20 20 0 0 00:13:31.729 asserts 204 204 204 0 n/a 00:13:31.729 00:13:31.729 Elapsed time = 0.008 seconds 00:13:31.987 ************************************ 00:13:31.987 END TEST accel_dif_functional_tests 00:13:31.987 ************************************ 00:13:31.987 00:13:31.987 real 0m0.523s 00:13:31.987 user 0m0.641s 00:13:31.987 sys 0m0.153s 00:13:31.987 13:31:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.987 13:31:44 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:31.987 00:13:31.987 real 0m33.096s 00:13:31.987 user 0m34.986s 00:13:31.987 sys 0m3.976s 00:13:31.987 ************************************ 00:13:31.987 END TEST accel 00:13:31.987 ************************************ 00:13:31.987 13:31:45 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.987 13:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:13:31.987 13:31:45 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:31.987 13:31:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:31.987 13:31:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.987 13:31:45 -- common/autotest_common.sh@10 -- # set +x 00:13:31.987 ************************************ 00:13:31.987 START TEST accel_rpc 00:13:31.987 ************************************ 00:13:31.987 13:31:45 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:32.246 * Looking for test storage... 
00:13:32.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:32.246 13:31:45 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:32.246 13:31:45 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=75191 00:13:32.246 13:31:45 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:32.246 13:31:45 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 75191 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 75191 ']' 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:32.246 13:31:45 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.246 [2024-05-15 13:31:45.229155] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:32.246 [2024-05-15 13:31:45.229529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75191 ] 00:13:32.504 [2024-05-15 13:31:45.355984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
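The accel_rpc suite starting here drives the target through its pre-init RPC interface: spdk_tgt is launched with --wait-for-rpc so the accel framework stays unconfigured until the copy opcode has been pinned to a module. The lines below are a minimal standalone sketch of that sequence, assuming this workspace layout and using a retry on rpc_get_methods in place of the harness's waitforlisten helper; the actual assignments and checks appear in the trace that follows.

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Start the target but hold back framework initialization until RPCs arrive.
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
tgt_pid=$!
trap 'kill "$tgt_pid"' EXIT

# Stand-in for the harness's waitforlisten: retry until the RPC socket answers.
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

"$RPC" accel_assign_opc -o copy -m software    # route the copy opcode to the software module
"$RPC" framework_start_init                    # only now finish framework initialization
"$RPC" accel_get_opc_assignments | jq -r .copy # prints "software" if the assignment stuck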
00:13:32.504 [2024-05-15 13:31:45.369474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.504 [2024-05-15 13:31:45.433931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.073 13:31:46 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:33.073 13:31:46 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:33.073 13:31:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:33.073 13:31:46 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:33.073 13:31:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:33.073 13:31:46 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:33.073 13:31:46 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:33.073 13:31:46 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:33.073 13:31:46 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.073 13:31:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.073 ************************************ 00:13:33.073 START TEST accel_assign_opcode 00:13:33.073 ************************************ 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:33.073 [2024-05-15 13:31:46.138791] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:33.073 [2024-05-15 13:31:46.146777] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.073 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.332 software 00:13:33.332 00:13:33.332 real 0m0.242s 
00:13:33.332 user 0m0.044s 00:13:33.332 sys 0m0.010s 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.332 13:31:46 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:33.332 ************************************ 00:13:33.332 END TEST accel_assign_opcode 00:13:33.332 ************************************ 00:13:33.332 13:31:46 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 75191 00:13:33.332 13:31:46 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 75191 ']' 00:13:33.332 13:31:46 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 75191 00:13:33.332 13:31:46 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:13:33.332 13:31:46 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.332 13:31:46 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75191 00:13:33.592 13:31:46 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.592 13:31:46 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.592 13:31:46 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75191' 00:13:33.592 killing process with pid 75191 00:13:33.592 13:31:46 accel_rpc -- common/autotest_common.sh@965 -- # kill 75191 00:13:33.592 13:31:46 accel_rpc -- common/autotest_common.sh@970 -- # wait 75191 00:13:33.850 00:13:33.850 real 0m1.703s 00:13:33.850 user 0m1.718s 00:13:33.850 sys 0m0.428s 00:13:33.850 13:31:46 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.850 13:31:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.850 ************************************ 00:13:33.850 END TEST accel_rpc 00:13:33.850 ************************************ 00:13:33.850 13:31:46 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:33.850 13:31:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:33.850 13:31:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.850 13:31:46 -- common/autotest_common.sh@10 -- # set +x 00:13:33.850 ************************************ 00:13:33.850 START TEST app_cmdline 00:13:33.850 ************************************ 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:33.850 * Looking for test storage... 00:13:33.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:33.850 13:31:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:33.850 13:31:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=75284 00:13:33.850 13:31:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:33.850 13:31:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 75284 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 75284 ']' 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
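For the cmdline suite spinning up here, the interesting knob is --rpcs-allowed: the target is started with an allow-list of exactly spdk_get_version and rpc_get_methods, and the checks traced below confirm that only those two methods answer while anything else comes back as JSON-RPC error -32601 ("Method not found"). A minimal sketch of that behaviour, under the same path assumptions as above and again polling rpc_get_methods for readiness:

SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_DIR/scripts/rpc.py"

# Only the two listed methods are callable on this target instance.
"$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
trap 'kill "$tgt_pid"' EXIT
until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

"$RPC" spdk_get_version | jq -r .version     # allowed: "SPDK v24.05-pre git sha1 253cca4fc"
"$RPC" rpc_get_methods | jq -r '.[]' | sort  # allowed: exactly the two whitelisted methods
"$RPC" env_dpdk_get_mem_stats \
    || echo "rejected as expected (-32601 Method not found)"  # any non-listed method fails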
00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.850 13:31:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:34.108 [2024-05-15 13:31:46.972559] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:34.108 [2024-05-15 13:31:46.972847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75284 ] 00:13:34.108 [2024-05-15 13:31:47.093956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:34.108 [2024-05-15 13:31:47.105479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.108 [2024-05-15 13:31:47.158234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.374 13:31:47 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.374 13:31:47 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:13:34.374 13:31:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:34.681 { 00:13:34.681 "version": "SPDK v24.05-pre git sha1 253cca4fc", 00:13:34.681 "fields": { 00:13:34.681 "major": 24, 00:13:34.681 "minor": 5, 00:13:34.681 "patch": 0, 00:13:34.681 "suffix": "-pre", 00:13:34.681 "commit": "253cca4fc" 00:13:34.681 } 00:13:34.681 } 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:34.681 13:31:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:34.681 13:31:47 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:34.946 request: 00:13:34.946 { 00:13:34.946 "method": "env_dpdk_get_mem_stats", 00:13:34.946 "req_id": 1 00:13:34.946 } 00:13:34.946 Got JSON-RPC error response 00:13:34.946 response: 00:13:34.946 { 00:13:34.946 "code": -32601, 00:13:34.946 "message": "Method not found" 00:13:34.946 } 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:34.946 13:31:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 75284 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 75284 ']' 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 75284 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:34.946 13:31:48 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75284 00:13:35.206 13:31:48 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:35.206 13:31:48 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:35.206 13:31:48 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75284' 00:13:35.206 killing process with pid 75284 00:13:35.206 13:31:48 app_cmdline -- common/autotest_common.sh@965 -- # kill 75284 00:13:35.206 13:31:48 app_cmdline -- common/autotest_common.sh@970 -- # wait 75284 00:13:35.464 ************************************ 00:13:35.464 END TEST app_cmdline 00:13:35.464 ************************************ 00:13:35.464 00:13:35.464 real 0m1.563s 00:13:35.464 user 0m1.983s 00:13:35.464 sys 0m0.415s 00:13:35.465 13:31:48 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.465 13:31:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:35.465 13:31:48 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:35.465 13:31:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:35.465 13:31:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.465 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:13:35.465 ************************************ 00:13:35.465 START TEST version 00:13:35.465 ************************************ 00:13:35.465 13:31:48 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:35.465 * Looking for test storage... 
00:13:35.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:35.465 13:31:48 version -- app/version.sh@17 -- # get_header_version major 00:13:35.465 13:31:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # cut -f2 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:35.465 13:31:48 version -- app/version.sh@17 -- # major=24 00:13:35.465 13:31:48 version -- app/version.sh@18 -- # get_header_version minor 00:13:35.465 13:31:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # cut -f2 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:35.465 13:31:48 version -- app/version.sh@18 -- # minor=5 00:13:35.465 13:31:48 version -- app/version.sh@19 -- # get_header_version patch 00:13:35.465 13:31:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # cut -f2 00:13:35.465 13:31:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:35.724 13:31:48 version -- app/version.sh@19 -- # patch=0 00:13:35.724 13:31:48 version -- app/version.sh@20 -- # get_header_version suffix 00:13:35.724 13:31:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:35.724 13:31:48 version -- app/version.sh@14 -- # cut -f2 00:13:35.724 13:31:48 version -- app/version.sh@14 -- # tr -d '"' 00:13:35.724 13:31:48 version -- app/version.sh@20 -- # suffix=-pre 00:13:35.724 13:31:48 version -- app/version.sh@22 -- # version=24.5 00:13:35.724 13:31:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:35.724 13:31:48 version -- app/version.sh@28 -- # version=24.5rc0 00:13:35.724 13:31:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:35.724 13:31:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:35.724 13:31:48 version -- app/version.sh@30 -- # py_version=24.5rc0 00:13:35.724 13:31:48 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:35.724 00:13:35.724 real 0m0.173s 00:13:35.724 user 0m0.095s 00:13:35.724 sys 0m0.110s 00:13:35.724 ************************************ 00:13:35.724 END TEST version 00:13:35.724 ************************************ 00:13:35.724 13:31:48 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.724 13:31:48 version -- common/autotest_common.sh@10 -- # set +x 00:13:35.724 13:31:48 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:35.724 13:31:48 -- spdk/autotest.sh@194 -- # uname -s 00:13:35.724 13:31:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:35.724 13:31:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:35.724 13:31:48 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:13:35.724 13:31:48 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:13:35.724 13:31:48 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:35.724 13:31:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:35.724 13:31:48 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.724 13:31:48 -- common/autotest_common.sh@10 -- # set +x 00:13:35.724 ************************************ 00:13:35.724 START TEST spdk_dd 00:13:35.724 ************************************ 00:13:35.724 13:31:48 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:13:35.724 * Looking for test storage... 00:13:35.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:35.724 13:31:48 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.724 13:31:48 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.724 13:31:48 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.724 13:31:48 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.724 13:31:48 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.724 13:31:48 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.724 13:31:48 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.724 13:31:48 spdk_dd -- paths/export.sh@5 -- # export PATH 00:13:35.724 13:31:48 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.724 13:31:48 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:36.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:36.292 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:36.292 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:36.292 13:31:49 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:13:36.292 13:31:49 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:13:36.292 13:31:49 spdk_dd -- 
scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@230 -- # local class 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@232 -- # local progif 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@233 -- # class=01 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@15 -- # local i 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@24 -- # return 0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@15 -- # local i 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@24 -- # return 0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:13:36.292 13:31:49 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 00:13:36.292 13:31:49 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@139 -- # local lib so 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 
-- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* 
]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:13:36.293 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- 
dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:13:36.294 * spdk_dd linked to liburing 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 
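The liburing probe that just completed is a useful pattern on its own: dd/common.sh asks the dynamic loader, via LD_TRACE_LOADED_OBJECTS=1, which shared objects spdk_dd would map and matches each one against liburing.so.*, then consults the CONFIG_* values from build_config.sh that continue in the trace below. A self-contained sketch of the loader-trace half, with the binary path assumed from this workspace:

liburing_in_use=0
while read -r lib _ so _; do                       # ldd-style lines: "<lib> => <path> (<addr>)"
    if [[ $lib == liburing.so.* ]]; then
        printf '* spdk_dd linked to liburing (%s)\n' "$so"
        liburing_in_use=1
    fi
done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
echo "liburing_in_use=$liburing_in_use"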
00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:13:36.294 13:31:49 spdk_dd -- 
common/build_config.sh@75 -- # CONFIG_TESTS=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:13:36.294 13:31:49 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:13:36.294 13:31:49 spdk_dd -- dd/common.sh@157 -- # return 0 00:13:36.294 13:31:49 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:13:36.294 13:31:49 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:36.294 13:31:49 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:36.294 13:31:49 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.294 13:31:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:13:36.294 ************************************ 00:13:36.294 START TEST spdk_dd_basic_rw 00:13:36.294 ************************************ 00:13:36.294 13:31:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:13:36.553 * Looking for test storage... 
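The trace above condenses the liburing probe in dd/common.sh: the shared objects spdk_dd links against are scanned for liburing.so.*, build_config.sh is sourced, and liburing_in_use is exported as 1 because /usr/lib64/liburing.so.2 is present, so the guard in dd.sh (which only trips when uring tests are requested but the binary is not linked to liburing) falls through and basic_rw.sh is dispatched. A minimal bash sketch of that decision, assuming ldd is used to enumerate the linked libraries and with paths shortened (the real dd/common.sh may obtain the list differently):

    #!/usr/bin/env bash
    # Probe whether spdk_dd was linked against liburing and whether the
    # runtime library is installed; mirrors the liburing_in_use export seen above.
    liburing_in_use=0
    if ldd build/bin/spdk_dd | grep -q 'liburing\.so' && [[ -e /usr/lib64/liburing.so.2 ]]; then
        liburing_in_use=1
    fi
    export liburing_in_use
    # dd.sh@15 only evaluates this condition in the trace; bailing out is an assumed consequence.
    if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
        echo 'SPDK_TEST_URING=1 requires spdk_dd to be linked with liburing' >&2
        exit 1
    fi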
00:13:36.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:13:36.553 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:36.813 ************************************ 00:13:36.813 START TEST dd_bs_lt_native_bs 00:13:36.813 ************************************ 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:13:36.813 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:36.814 13:31:49 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:13:36.814 { 00:13:36.814 "subsystems": [ 00:13:36.814 { 00:13:36.814 "subsystem": "bdev", 00:13:36.814 "config": [ 00:13:36.814 { 00:13:36.814 "params": { 00:13:36.814 "trtype": "pcie", 00:13:36.814 "traddr": "0000:00:10.0", 00:13:36.814 "name": "Nvme0" 00:13:36.814 }, 00:13:36.814 "method": "bdev_nvme_attach_controller" 00:13:36.814 }, 00:13:36.814 { 00:13:36.814 "method": "bdev_wait_for_examine" 00:13:36.814 } 00:13:36.814 ] 00:13:36.814 } 00:13:36.814 ] 00:13:36.814 } 00:13:36.814 [2024-05-15 13:31:49.780014] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 
initialization... 00:13:36.814 [2024-05-15 13:31:49.780470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75591 ] 00:13:37.070 [2024-05-15 13:31:49.919635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:37.070 [2024-05-15 13:31:49.936495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.071 [2024-05-15 13:31:50.001344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.071 [2024-05-15 13:31:50.148289] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:13:37.071 [2024-05-15 13:31:50.148603] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:37.328 [2024-05-15 13:31:50.259930] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:13:37.328 ************************************ 00:13:37.328 END TEST dd_bs_lt_native_bs 00:13:37.328 ************************************ 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:37.328 00:13:37.328 real 0m0.658s 00:13:37.328 user 0m0.429s 00:13:37.328 sys 0m0.170s 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:37.328 ************************************ 00:13:37.328 START TEST dd_rw 00:13:37.328 ************************************ 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:37.328 
13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:37.328 13:31:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:38.296 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:13:38.297 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:38.297 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:38.297 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:38.297 { 00:13:38.297 "subsystems": [ 00:13:38.297 { 00:13:38.297 "subsystem": "bdev", 00:13:38.297 "config": [ 00:13:38.297 { 00:13:38.297 "params": { 00:13:38.297 "trtype": "pcie", 00:13:38.297 "traddr": "0000:00:10.0", 00:13:38.297 "name": "Nvme0" 00:13:38.297 }, 00:13:38.297 "method": "bdev_nvme_attach_controller" 00:13:38.297 }, 00:13:38.297 { 00:13:38.297 "method": "bdev_wait_for_examine" 00:13:38.297 } 00:13:38.297 ] 00:13:38.297 } 00:13:38.297 ] 00:13:38.297 } 00:13:38.297 [2024-05-15 13:31:51.237759] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:38.297 [2024-05-15 13:31:51.238603] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75633 ] 00:13:38.297 [2024-05-15 13:31:51.366772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:38.297 [2024-05-15 13:31:51.386638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.554 [2024-05-15 13:31:51.452946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.811  Copying: 60/60 [kB] (average 29 MBps) 00:13:38.811 00:13:38.811 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:13:38.811 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:38.811 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:38.811 13:31:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:38.811 [2024-05-15 13:31:51.841497] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:38.812 [2024-05-15 13:31:51.841899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75641 ] 00:13:38.812 { 00:13:38.812 "subsystems": [ 00:13:38.812 { 00:13:38.812 "subsystem": "bdev", 00:13:38.812 "config": [ 00:13:38.812 { 00:13:38.812 "params": { 00:13:38.812 "trtype": "pcie", 00:13:38.812 "traddr": "0000:00:10.0", 00:13:38.812 "name": "Nvme0" 00:13:38.812 }, 00:13:38.812 "method": "bdev_nvme_attach_controller" 00:13:38.812 }, 00:13:38.812 { 00:13:38.812 "method": "bdev_wait_for_examine" 00:13:38.812 } 00:13:38.812 ] 00:13:38.812 } 00:13:38.812 ] 00:13:38.812 } 00:13:39.069 [2024-05-15 13:31:51.969654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:39.069 [2024-05-15 13:31:51.989129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.069 [2024-05-15 13:31:52.043136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.328  Copying: 60/60 [kB] (average 29 MBps) 00:13:39.328 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:39.328 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:39.328 [2024-05-15 13:31:52.414800] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:39.328 [2024-05-15 13:31:52.415123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75662 ] 00:13:39.328 { 00:13:39.328 "subsystems": [ 00:13:39.328 { 00:13:39.328 "subsystem": "bdev", 00:13:39.328 "config": [ 00:13:39.328 { 00:13:39.328 "params": { 00:13:39.328 "trtype": "pcie", 00:13:39.328 "traddr": "0000:00:10.0", 00:13:39.328 "name": "Nvme0" 00:13:39.328 }, 00:13:39.328 "method": "bdev_nvme_attach_controller" 00:13:39.328 }, 00:13:39.328 { 00:13:39.328 "method": "bdev_wait_for_examine" 00:13:39.328 } 00:13:39.328 ] 00:13:39.328 } 00:13:39.328 ] 00:13:39.328 } 00:13:39.586 [2024-05-15 13:31:52.541573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
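Each bs/qd combination in dd_rw follows the same four-step pattern traced here: write a generated dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1, diff the two files, then overwrite the first megabyte of the bdev with zeroes before the next pass. The bdev comes from the JSON configuration that gen_conf prints and spdk_dd consumes on /dev/fd/62. A condensed sketch of one pass (bs=4096, qd=1, count=15, i.e. 61440 bytes), with paths shortened and bash process substitution standing in for the harness's /dev/fd plumbing:

    # JSON bdev config equivalent to the one traced above
    conf='{ "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_nvme_attach_controller",
        "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" } },
      { "method": "bdev_wait_for_examine" } ] } ] }'

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json <(echo "$conf")  # write
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(echo "$conf")  # read back
    diff -q dd.dump0 dd.dump1                                                              # verify contents match
    spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1     --json <(echo "$conf")  # clear first 1 MiB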
00:13:39.586 [2024-05-15 13:31:52.557056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.586 [2024-05-15 13:31:52.617644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.120  Copying: 1024/1024 [kB] (average 500 MBps) 00:13:40.120 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:40.120 13:31:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:40.684 13:31:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:13:40.684 13:31:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:40.684 13:31:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:40.684 13:31:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:40.684 [2024-05-15 13:31:53.649763] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:40.684 [2024-05-15 13:31:53.650867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75681 ] 00:13:40.684 { 00:13:40.684 "subsystems": [ 00:13:40.684 { 00:13:40.684 "subsystem": "bdev", 00:13:40.684 "config": [ 00:13:40.684 { 00:13:40.684 "params": { 00:13:40.684 "trtype": "pcie", 00:13:40.684 "traddr": "0000:00:10.0", 00:13:40.684 "name": "Nvme0" 00:13:40.684 }, 00:13:40.684 "method": "bdev_nvme_attach_controller" 00:13:40.684 }, 00:13:40.684 { 00:13:40.684 "method": "bdev_wait_for_examine" 00:13:40.684 } 00:13:40.684 ] 00:13:40.684 } 00:13:40.684 ] 00:13:40.684 } 00:13:40.941 [2024-05-15 13:31:53.783798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:40.941 [2024-05-15 13:31:53.802931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.941 [2024-05-15 13:31:53.860494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.198  Copying: 60/60 [kB] (average 58 MBps) 00:13:41.198 00:13:41.198 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:13:41.198 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:41.198 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:41.198 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:41.198 [2024-05-15 13:31:54.257609] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:41.198 [2024-05-15 13:31:54.258011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75700 ] 00:13:41.198 { 00:13:41.198 "subsystems": [ 00:13:41.198 { 00:13:41.198 "subsystem": "bdev", 00:13:41.198 "config": [ 00:13:41.198 { 00:13:41.198 "params": { 00:13:41.198 "trtype": "pcie", 00:13:41.198 "traddr": "0000:00:10.0", 00:13:41.198 "name": "Nvme0" 00:13:41.198 }, 00:13:41.198 "method": "bdev_nvme_attach_controller" 00:13:41.198 }, 00:13:41.198 { 00:13:41.198 "method": "bdev_wait_for_examine" 00:13:41.198 } 00:13:41.198 ] 00:13:41.198 } 00:13:41.198 ] 00:13:41.198 } 00:13:41.455 [2024-05-15 13:31:54.382675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:41.455 [2024-05-15 13:31:54.400608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.455 [2024-05-15 13:31:54.451742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.713  Copying: 60/60 [kB] (average 29 MBps) 00:13:41.713 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:41.713 13:31:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:41.971 [2024-05-15 13:31:54.814667] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:41.971 [2024-05-15 13:31:54.814971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75710 ] 00:13:41.971 { 00:13:41.971 "subsystems": [ 00:13:41.971 { 00:13:41.971 "subsystem": "bdev", 00:13:41.971 "config": [ 00:13:41.971 { 00:13:41.971 "params": { 00:13:41.971 "trtype": "pcie", 00:13:41.971 "traddr": "0000:00:10.0", 00:13:41.971 "name": "Nvme0" 00:13:41.971 }, 00:13:41.971 "method": "bdev_nvme_attach_controller" 00:13:41.971 }, 00:13:41.971 { 00:13:41.971 "method": "bdev_wait_for_examine" 00:13:41.971 } 00:13:41.971 ] 00:13:41.971 } 00:13:41.971 ] 00:13:41.971 } 00:13:41.971 [2024-05-15 13:31:54.935016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:41.971 [2024-05-15 13:31:54.953283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.971 [2024-05-15 13:31:55.007615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.229  Copying: 1024/1024 [kB] (average 500 MBps) 00:13:42.229 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:42.486 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:43.052 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:13:43.052 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:43.052 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:43.052 13:31:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:43.052 [2024-05-15 13:31:55.896303] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:43.052 [2024-05-15 13:31:55.896666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75729 ] 00:13:43.052 { 00:13:43.052 "subsystems": [ 00:13:43.052 { 00:13:43.052 "subsystem": "bdev", 00:13:43.052 "config": [ 00:13:43.052 { 00:13:43.052 "params": { 00:13:43.052 "trtype": "pcie", 00:13:43.052 "traddr": "0000:00:10.0", 00:13:43.052 "name": "Nvme0" 00:13:43.052 }, 00:13:43.052 "method": "bdev_nvme_attach_controller" 00:13:43.052 }, 00:13:43.052 { 00:13:43.052 "method": "bdev_wait_for_examine" 00:13:43.052 } 00:13:43.052 ] 00:13:43.052 } 00:13:43.052 ] 00:13:43.052 } 00:13:43.052 [2024-05-15 13:31:56.023988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:43.052 [2024-05-15 13:31:56.043978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.052 [2024-05-15 13:31:56.098292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.567  Copying: 56/56 [kB] (average 54 MBps) 00:13:43.567 00:13:43.567 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:43.567 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:13:43.567 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:43.567 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:43.567 { 00:13:43.567 "subsystems": [ 00:13:43.567 { 00:13:43.567 "subsystem": "bdev", 00:13:43.567 "config": [ 00:13:43.567 { 00:13:43.567 "params": { 00:13:43.567 "trtype": "pcie", 00:13:43.567 "traddr": "0000:00:10.0", 00:13:43.567 "name": "Nvme0" 00:13:43.567 }, 00:13:43.567 "method": "bdev_nvme_attach_controller" 00:13:43.567 }, 00:13:43.567 { 00:13:43.567 "method": "bdev_wait_for_examine" 00:13:43.567 } 00:13:43.567 ] 00:13:43.567 } 00:13:43.567 ] 00:13:43.567 } 00:13:43.567 [2024-05-15 13:31:56.469646] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:43.567 [2024-05-15 13:31:56.469909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75748 ] 00:13:43.567 [2024-05-15 13:31:56.602488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:43.567 [2024-05-15 13:31:56.621434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.843 [2024-05-15 13:31:56.672634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.100  Copying: 56/56 [kB] (average 27 MBps) 00:13:44.100 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:44.100 13:31:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:44.100 [2024-05-15 13:31:57.036532] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:44.100 [2024-05-15 13:31:57.036625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75758 ] 00:13:44.100 { 00:13:44.100 "subsystems": [ 00:13:44.100 { 00:13:44.100 "subsystem": "bdev", 00:13:44.100 "config": [ 00:13:44.100 { 00:13:44.100 "params": { 00:13:44.100 "trtype": "pcie", 00:13:44.100 "traddr": "0000:00:10.0", 00:13:44.100 "name": "Nvme0" 00:13:44.100 }, 00:13:44.100 "method": "bdev_nvme_attach_controller" 00:13:44.100 }, 00:13:44.100 { 00:13:44.100 "method": "bdev_wait_for_examine" 00:13:44.100 } 00:13:44.100 ] 00:13:44.100 } 00:13:44.100 ] 00:13:44.100 } 00:13:44.100 [2024-05-15 13:31:57.157820] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:44.100 [2024-05-15 13:31:57.172096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.357 [2024-05-15 13:31:57.223626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.616  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:44.616 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:44.616 13:31:57 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:45.182 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:13:45.182 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:45.182 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:45.182 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:45.182 { 00:13:45.182 "subsystems": [ 00:13:45.182 { 00:13:45.182 "subsystem": "bdev", 00:13:45.182 "config": [ 00:13:45.182 { 00:13:45.182 "params": { 00:13:45.182 "trtype": "pcie", 00:13:45.182 "traddr": "0000:00:10.0", 00:13:45.182 "name": "Nvme0" 00:13:45.182 }, 00:13:45.183 "method": "bdev_nvme_attach_controller" 00:13:45.183 }, 00:13:45.183 { 00:13:45.183 "method": "bdev_wait_for_examine" 00:13:45.183 } 00:13:45.183 ] 00:13:45.183 } 00:13:45.183 ] 00:13:45.183 } 00:13:45.183 [2024-05-15 13:31:58.160204] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:45.183 [2024-05-15 13:31:58.160330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75783 ] 00:13:45.441 [2024-05-15 13:31:58.287150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:45.441 [2024-05-15 13:31:58.305163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.441 [2024-05-15 13:31:58.357348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.699  Copying: 56/56 [kB] (average 54 MBps) 00:13:45.699 00:13:45.699 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:13:45.699 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:45.699 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:45.699 13:31:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:45.699 { 00:13:45.699 "subsystems": [ 00:13:45.699 { 00:13:45.699 "subsystem": "bdev", 00:13:45.699 "config": [ 00:13:45.699 { 00:13:45.699 "params": { 00:13:45.699 "trtype": "pcie", 00:13:45.699 "traddr": "0000:00:10.0", 00:13:45.699 "name": "Nvme0" 00:13:45.699 }, 00:13:45.699 "method": "bdev_nvme_attach_controller" 00:13:45.699 }, 00:13:45.699 { 00:13:45.699 "method": "bdev_wait_for_examine" 00:13:45.699 } 00:13:45.699 ] 00:13:45.699 } 00:13:45.699 ] 00:13:45.699 } 00:13:45.699 [2024-05-15 13:31:58.747145] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:45.699 [2024-05-15 13:31:58.747305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75796 ] 00:13:45.982 [2024-05-15 13:31:58.881284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:45.982 [2024-05-15 13:31:58.894022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.982 [2024-05-15 13:31:58.946006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.239  Copying: 56/56 [kB] (average 54 MBps) 00:13:46.239 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:46.239 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:46.239 [2024-05-15 13:31:59.310796] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:46.239 [2024-05-15 13:31:59.310890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75812 ] 00:13:46.239 { 00:13:46.239 "subsystems": [ 00:13:46.239 { 00:13:46.239 "subsystem": "bdev", 00:13:46.239 "config": [ 00:13:46.239 { 00:13:46.239 "params": { 00:13:46.239 "trtype": "pcie", 00:13:46.239 "traddr": "0000:00:10.0", 00:13:46.239 "name": "Nvme0" 00:13:46.239 }, 00:13:46.239 "method": "bdev_nvme_attach_controller" 00:13:46.239 }, 00:13:46.239 { 00:13:46.239 "method": "bdev_wait_for_examine" 00:13:46.239 } 00:13:46.239 ] 00:13:46.239 } 00:13:46.239 ] 00:13:46.239 } 00:13:46.497 [2024-05-15 13:31:59.431764] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:46.497 [2024-05-15 13:31:59.451845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.497 [2024-05-15 13:31:59.511271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.756  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:46.756 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:46.756 13:31:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:47.689 13:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:13:47.689 13:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:47.689 13:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:47.689 13:32:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:47.689 [2024-05-15 13:32:00.486822] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:47.689 [2024-05-15 13:32:00.486952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75831 ] 00:13:47.689 { 00:13:47.689 "subsystems": [ 00:13:47.689 { 00:13:47.689 "subsystem": "bdev", 00:13:47.689 "config": [ 00:13:47.689 { 00:13:47.689 "params": { 00:13:47.689 "trtype": "pcie", 00:13:47.689 "traddr": "0000:00:10.0", 00:13:47.689 "name": "Nvme0" 00:13:47.689 }, 00:13:47.689 "method": "bdev_nvme_attach_controller" 00:13:47.689 }, 00:13:47.689 { 00:13:47.689 "method": "bdev_wait_for_examine" 00:13:47.689 } 00:13:47.689 ] 00:13:47.689 } 00:13:47.689 ] 00:13:47.689 } 00:13:47.689 [2024-05-15 13:32:00.614795] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:13:47.689 [2024-05-15 13:32:00.632494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.689 [2024-05-15 13:32:00.714580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.947  Copying: 48/48 [kB] (average 46 MBps) 00:13:47.947 00:13:48.204 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:13:48.205 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:48.205 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:48.205 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:48.205 { 00:13:48.205 "subsystems": [ 00:13:48.205 { 00:13:48.205 "subsystem": "bdev", 00:13:48.205 "config": [ 00:13:48.205 { 00:13:48.205 "params": { 00:13:48.205 "trtype": "pcie", 00:13:48.205 "traddr": "0000:00:10.0", 00:13:48.205 "name": "Nvme0" 00:13:48.205 }, 00:13:48.205 "method": "bdev_nvme_attach_controller" 00:13:48.205 }, 00:13:48.205 { 00:13:48.205 "method": "bdev_wait_for_examine" 00:13:48.205 } 00:13:48.205 ] 00:13:48.205 } 00:13:48.205 ] 00:13:48.205 } 00:13:48.205 [2024-05-15 13:32:01.099739] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:48.205 [2024-05-15 13:32:01.099846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75844 ] 00:13:48.205 [2024-05-15 13:32:01.227479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:48.205 [2024-05-15 13:32:01.249956] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.463 [2024-05-15 13:32:01.303552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.721  Copying: 48/48 [kB] (average 46 MBps) 00:13:48.721 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:48.721 13:32:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:48.721 [2024-05-15 13:32:01.667933] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:48.721 [2024-05-15 13:32:01.668025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75864 ] 00:13:48.721 { 00:13:48.721 "subsystems": [ 00:13:48.721 { 00:13:48.721 "subsystem": "bdev", 00:13:48.721 "config": [ 00:13:48.721 { 00:13:48.721 "params": { 00:13:48.721 "trtype": "pcie", 00:13:48.721 "traddr": "0000:00:10.0", 00:13:48.721 "name": "Nvme0" 00:13:48.721 }, 00:13:48.721 "method": "bdev_nvme_attach_controller" 00:13:48.721 }, 00:13:48.721 { 00:13:48.721 "method": "bdev_wait_for_examine" 00:13:48.721 } 00:13:48.721 ] 00:13:48.721 } 00:13:48.721 ] 00:13:48.721 } 00:13:48.721 [2024-05-15 13:32:01.788403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:48.721 [2024-05-15 13:32:01.802596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.979 [2024-05-15 13:32:01.855925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.237  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:49.237 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:13:49.237 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:13:49.802 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:13:49.802 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:49.802 13:32:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:49.802 [2024-05-15 13:32:02.744743] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:49.802 [2024-05-15 13:32:02.744836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75883 ] 00:13:49.802 { 00:13:49.802 "subsystems": [ 00:13:49.802 { 00:13:49.802 "subsystem": "bdev", 00:13:49.802 "config": [ 00:13:49.802 { 00:13:49.802 "params": { 00:13:49.802 "trtype": "pcie", 00:13:49.802 "traddr": "0000:00:10.0", 00:13:49.802 "name": "Nvme0" 00:13:49.802 }, 00:13:49.802 "method": "bdev_nvme_attach_controller" 00:13:49.802 }, 00:13:49.802 { 00:13:49.802 "method": "bdev_wait_for_examine" 00:13:49.802 } 00:13:49.802 ] 00:13:49.802 } 00:13:49.802 ] 00:13:49.802 } 00:13:49.802 [2024-05-15 13:32:02.865286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:49.802 [2024-05-15 13:32:02.879914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.060 [2024-05-15 13:32:02.940290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.318  Copying: 48/48 [kB] (average 46 MBps) 00:13:50.318 00:13:50.318 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:13:50.318 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:13:50.318 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:50.318 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:50.318 [2024-05-15 13:32:03.315194] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:50.318 [2024-05-15 13:32:03.315329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75898 ] 00:13:50.318 { 00:13:50.318 "subsystems": [ 00:13:50.318 { 00:13:50.318 "subsystem": "bdev", 00:13:50.318 "config": [ 00:13:50.318 { 00:13:50.318 "params": { 00:13:50.318 "trtype": "pcie", 00:13:50.318 "traddr": "0000:00:10.0", 00:13:50.318 "name": "Nvme0" 00:13:50.318 }, 00:13:50.318 "method": "bdev_nvme_attach_controller" 00:13:50.318 }, 00:13:50.318 { 00:13:50.318 "method": "bdev_wait_for_examine" 00:13:50.318 } 00:13:50.318 ] 00:13:50.318 } 00:13:50.318 ] 00:13:50.318 } 00:13:50.576 [2024-05-15 13:32:03.438424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:50.576 [2024-05-15 13:32:03.454674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.576 [2024-05-15 13:32:03.525647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.835  Copying: 48/48 [kB] (average 46 MBps) 00:13:50.835 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:50.835 13:32:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:50.835 [2024-05-15 13:32:03.912431] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:50.835 [2024-05-15 13:32:03.912542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75913 ] 00:13:50.835 { 00:13:50.835 "subsystems": [ 00:13:50.835 { 00:13:50.835 "subsystem": "bdev", 00:13:50.835 "config": [ 00:13:50.835 { 00:13:50.835 "params": { 00:13:50.835 "trtype": "pcie", 00:13:50.835 "traddr": "0000:00:10.0", 00:13:50.835 "name": "Nvme0" 00:13:50.835 }, 00:13:50.835 "method": "bdev_nvme_attach_controller" 00:13:50.835 }, 00:13:50.835 { 00:13:50.835 "method": "bdev_wait_for_examine" 00:13:50.835 } 00:13:50.835 ] 00:13:50.835 } 00:13:50.835 ] 00:13:50.835 } 00:13:51.091 [2024-05-15 13:32:04.034216] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.091 [2024-05-15 13:32:04.048418] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.091 [2024-05-15 13:32:04.126542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.378  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:51.378 00:13:51.378 00:13:51.378 real 0m14.032s 00:13:51.378 user 0m9.860s 00:13:51.378 sys 0m5.140s 00:13:51.378 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.378 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:13:51.378 ************************************ 00:13:51.378 END TEST dd_rw 00:13:51.378 ************************************ 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:51.650 ************************************ 00:13:51.650 START TEST dd_rw_offset 00:13:51.650 ************************************ 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=efk5y8c78d8jur96d5t5tn9i5ef5sy1elo2s44rj5fwf9kj6qfj8xcmj6lj1paf846ce4girl9i1ulv9tzdmrnx38c5oxzbtwfzuyy6js9o2o3yovedketafqnh5kgfrqi01prn5loa39mju2mx592q00kohlpin6iyvbh4sikpysjmsvlrelj8tw4fr9vsjz6cqb3g42jwp5d8cyd7rndjmjp06pk9obaswfk82fam7b30ddgfj3zoiapp68dze13eksplni4u665a1cz0s436a7e2109a1zrhbdo5hryyeesou9gw9ehw2koebvhuxgkcwd3eiy1zn28khtgxnxpae62o02bk5y45jolbvhhjx4c512e7jug12ixw32bbrm9iv16gxuqwz8g3sldjrhcbqmcuk9gs6v1nf7d5ltu6umlqdpzgcf5e7na0quuj4d0r5e4tpdv7v1g6oxcw5168czafg9zt31thpc4b13i8l8wayse1dfcoeivlrzbz4cph1k80aufr08xwoy5jt2551w7vwt3qfp49ibtm9b7itejkwt7bve8dxslvthynwgytstu7ox732s3y3aily8k0tdjb3wzij2s5ar1vgtpwwbqpscscgw66arprk78bvxf205gq3tc8gplchog60y21etkdiiea7htnmxbumo5phpu3vdqyk5aao24xl4eddwfzntvf04tfm0g64v0nt2x14zpkl055ke5z0xewy4d9t3bsiopxexgdmcq7vb1mlqt24w2tgboyo0ok2ofet4wp9pr6fffgw3ur67t7gnaxbzd6tems45v1pepfwo4pbmniy98acz16b0whgeizdl48ss4alkqwk9645mrymcpx6edmvu4jat5wdueqs6jvhw5a1c4zzvl4rm51x6qsnf4w97vefpa6lox09848dz34ndy2jda5chzwiy8yz6yb1kibww2utdxoxw5avtxm6z9jigs0gd44qors1e8a4o29pussvm6e6ivrgmrt8l8mn8t9x1eigiq7pkjdpgupzn90nm7q8yxpz584mppc3hud41jzoodxdehv3tzbmk2nw7i67nh45pfwr9djvo2d5huqomf4df6830ojmypiinb370u7zsewkmydwqfl14571p0vh278x8lazftddyccwiahc03e671piccjjo21y166o6hqo6foikm6o2cfv2ah6e5bqn4wmbc035aaeruwh4vtnz2inu404wci61xlhwgut51dz8h92447u6xk8y86sl6csmnrtmoerckaaojq8j552kah42mlcprsa52tik5nlomjp8ycp7papdi21uwcmlxmdf3pril94qozk0y86y9nq93y379x52digeu9r2jmcqoyegejktcrl5ivjd9hrn5r1tq3i5rdzqg4e2vln4lf1whtd821mdp0s4bkoz8ttq3bvdqn6btorvl6p58di9l3dxnm5wkrdlang3fgl8a24lziuj5z3i8nyw6lwo4iu3h7h6no4gr54xmotpdeilk43f138knwh9psgobcpe9y06e8qv02ysjacojic49a73eantixqidgdlydtboy19qu98jm11qvn5kkdchcgss19u9lo59im6joayc2g9cqn6s310uu4tw5m2tk74sm8dcj4ny8bsnqx2oat6xiyn5kus91nt8s3qyw8qif7y4vygqmm6z10e4aoyu45qzegymcbynuniwujsfaisj76h3q7yck9g7xnel7zy1frgymze5a1ervxgdb3h9t0uizxnwgrg91i3q8inb09ovk8levh24o5kpmb4vr377k5uh4uh13g915evsjgjkl7u57xc46zvy89j5jb2tnamsuq5191f78ej7bj7a3s52aewm66lacr0z6xy5jchgvx6ri4gqvpced3w4pm4v8rruyd7hgqjildbsn5mfst36890rlwyifut278lkkq0qg5h1nux6swt6j2r8mn9icung2i0c7qbz2d3acxzzarj2fw9kjl56e1j8np0rx0s52u0rxtxt8duzor75ws9hg1ueyxl4m8tjw1olflbgql09d1kq0zgph9odcv1k8ogp0rk6nrcw6uscige2nlcgqewm010pk1tdxr5drm8to2kp2j6venrxj822wrorb06vkvx8cnecr1w5ue7yhht831hoaljetl7t6cterhge5iqitpnt7huacbb68x1mxzio5lktuucq8gmh18niqn6lgmp10v0lab89qmhui0o9zpwk77u20vhryt8ktr7asco5o8m7dst3u0z1k4ur6wpvw8c8azri2gx20nmviolwz80yoas6r6ggde4d75n04gb351v6vf65dhlj5y8y6ktr7gme2gs8oftilp90cdsvhjy9e3ttg249m3a6m0v9bthxdou2xnq84xycnkxe635q22okpi5t57uqrxtf6rd3x9c1lvf0upwjgsqpkkajs1ppx52mv1lvro27ifi59z48b12d15947r1jikdqmz4u4vswievkzft92m1tdex6935fxdinkc1kaxtamodrvucxdz6mdao5b8a4impsss4k1sm69z1q60ycoocsrtdsf8b1m93e6i9k0r1uv76i3h9cgzjz47d6k4p9ltfire7sbzbgc0m2wwwhtns0tye5e1a3f1r3xumcqicrellatd31qnohv62sy3jbgi5qagdcpbnt9x6ylo60dcemj2s9kqtlbgazzvrhcyjvnblurndahidap90uwbta2oxnhy9bw17jeh3rfpr7mvcpdv836i40g908utek25r0fmzlklhohb5gcv2ara8yl7lipcytlzbgugqzd4laul41cceelur51l5w89b5u3xfjnkm90scsxaf4spy7beuq1h89h17h2jqpopwlm28c2z5rb8v7g8wya43j0hx04i7py3o5up10vixqzbk91v1npephinbq9x24qs0fur258e2fzn8tyxpybnboj3whb7jglvnpwwcklig1yedo2ejy8zs4izm7tw2kjx818vlm1giwn845cf54xowhdiyw0a474qaaw5tq3cp7h5mraf50zcdfsbtqe4nbtg12393mb7m0t392tp2qble0e7136ugci7sfb6uzsky8sxgfbphjqexl0lh7yntm01cplewsf4mkueptcyxz92r6sjf091qwu0mjrkbra6yy2u7pmx7nnr6itnsqunict2t3hycemofbxrqew0axni6l7rspzcqyu9jekoymbqbht515x1lrk8fjr3q0iumqpupjxh7232lgmrf3osavh4gh70lpo16bng4eykhfglz1ztyr5nsipopz43x964plmzjldc1rgzxrt6noxjak6ngp1ieaaa350sxjqvusze78hxcptj1mhnx5pitiearnt3qfjjmq263eazgbj7xifz5f2anudjq8b310ybu2uayj0n71tm1i0zl3gl3bh4ynur7yjugvbfsclc3iuwdv8x6deidyluixkz9o608c56etjiniwppiq6s
wtrx6228j4rjk5cwtw51jmzurul9f05vkzql431l0iom7bfdkzjl0u22wme655asct1meq1yznug0u6eayqkbhswzpdnfgexki81tlkxr58kti5i7nph156mguk4aypixp9b7lkn20vi33jb4givbewfj4asoh0p2kguqk8q5paool1zbgnig63rbx2urpvrh8i7kshce9tzszcjuene2q60sgyl6trplus9ksrr8a3ai4wa43n0vv3xbfwtmluwzentuy9k4ml4sff6h7q71t347vwlu9lj2pu2059c11uikn3mey16tj96ve9skdxjqwfufrqz2ajrwrngryx80gtp0xmkxv8yd3m7n031oiyzeqwrzhd3q2zgn69mvrg32thkwqp8lrh9ulfryjyb1uatsqu1ng4qb797l6qnwldp3nbegu09wtgfr8g75myy9fq8yhhp1fha3qnqmunnvuy2pud6lkodffzrlrdbhsj2v1ukv8s1807dgsu68de84cz893ibqvb8ee13sdnnyyv54sjocdmtp4 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:13:51.650 13:32:04 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:13:51.650 { 00:13:51.650 "subsystems": [ 00:13:51.650 { 00:13:51.650 "subsystem": "bdev", 00:13:51.650 "config": [ 00:13:51.650 { 00:13:51.650 "params": { 00:13:51.650 "trtype": "pcie", 00:13:51.650 "traddr": "0000:00:10.0", 00:13:51.650 "name": "Nvme0" 00:13:51.650 }, 00:13:51.650 "method": "bdev_nvme_attach_controller" 00:13:51.650 }, 00:13:51.650 { 00:13:51.650 "method": "bdev_wait_for_examine" 00:13:51.650 } 00:13:51.650 ] 00:13:51.650 } 00:13:51.650 ] 00:13:51.650 } 00:13:51.650 [2024-05-15 13:32:04.601968] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:51.650 [2024-05-15 13:32:04.602115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75944 ] 00:13:51.651 [2024-05-15 13:32:04.732858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.651 [2024-05-15 13:32:04.747968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.908 [2024-05-15 13:32:04.803829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.167  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:52.167 00:13:52.167 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:13:52.167 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:13:52.167 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:13:52.167 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:13:52.167 [2024-05-15 13:32:05.185437] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:13:52.167 [2024-05-15 13:32:05.185526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75957 ] 00:13:52.167 { 00:13:52.167 "subsystems": [ 00:13:52.167 { 00:13:52.167 "subsystem": "bdev", 00:13:52.167 "config": [ 00:13:52.167 { 00:13:52.167 "params": { 00:13:52.167 "trtype": "pcie", 00:13:52.167 "traddr": "0000:00:10.0", 00:13:52.167 "name": "Nvme0" 00:13:52.167 }, 00:13:52.167 "method": "bdev_nvme_attach_controller" 00:13:52.167 }, 00:13:52.167 { 00:13:52.167 "method": "bdev_wait_for_examine" 00:13:52.167 } 00:13:52.167 ] 00:13:52.167 } 00:13:52.167 ] 00:13:52.167 } 00:13:52.426 [2024-05-15 13:32:05.310597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:52.426 [2024-05-15 13:32:05.327402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.426 [2024-05-15 13:32:05.382193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.686  Copying: 4096/4096 [B] (average 4000 kBps) 00:13:52.686 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:13:52.686 ************************************ 00:13:52.686 END TEST dd_rw_offset 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ efk5y8c78d8jur96d5t5tn9i5ef5sy1elo2s44rj5fwf9kj6qfj8xcmj6lj1paf846ce4girl9i1ulv9tzdmrnx38c5oxzbtwfzuyy6js9o2o3yovedketafqnh5kgfrqi01prn5loa39mju2mx592q00kohlpin6iyvbh4sikpysjmsvlrelj8tw4fr9vsjz6cqb3g42jwp5d8cyd7rndjmjp06pk9obaswfk82fam7b30ddgfj3zoiapp68dze13eksplni4u665a1cz0s436a7e2109a1zrhbdo5hryyeesou9gw9ehw2koebvhuxgkcwd3eiy1zn28khtgxnxpae62o02bk5y45jolbvhhjx4c512e7jug12ixw32bbrm9iv16gxuqwz8g3sldjrhcbqmcuk9gs6v1nf7d5ltu6umlqdpzgcf5e7na0quuj4d0r5e4tpdv7v1g6oxcw5168czafg9zt31thpc4b13i8l8wayse1dfcoeivlrzbz4cph1k80aufr08xwoy5jt2551w7vwt3qfp49ibtm9b7itejkwt7bve8dxslvthynwgytstu7ox732s3y3aily8k0tdjb3wzij2s5ar1vgtpwwbqpscscgw66arprk78bvxf205gq3tc8gplchog60y21etkdiiea7htnmxbumo5phpu3vdqyk5aao24xl4eddwfzntvf04tfm0g64v0nt2x14zpkl055ke5z0xewy4d9t3bsiopxexgdmcq7vb1mlqt24w2tgboyo0ok2ofet4wp9pr6fffgw3ur67t7gnaxbzd6tems45v1pepfwo4pbmniy98acz16b0whgeizdl48ss4alkqwk9645mrymcpx6edmvu4jat5wdueqs6jvhw5a1c4zzvl4rm51x6qsnf4w97vefpa6lox09848dz34ndy2jda5chzwiy8yz6yb1kibww2utdxoxw5avtxm6z9jigs0gd44qors1e8a4o29pussvm6e6ivrgmrt8l8mn8t9x1eigiq7pkjdpgupzn90nm7q8yxpz584mppc3hud41jzoodxdehv3tzbmk2nw7i67nh45pfwr9djvo2d5huqomf4df6830ojmypiinb370u7zsewkmydwqfl14571p0vh278x8lazftddyccwiahc03e671piccjjo21y166o6hqo6foikm6o2cfv2ah6e5bqn4wmbc035aaeruwh4vtnz2inu404wci61xlhwgut51dz8h92447u6xk8y86sl6csmnrtmoerckaaojq8j552kah42mlcprsa52tik5nlomjp8ycp7papdi21uwcmlxmdf3pril94qozk0y86y9nq93y379x52digeu9r2jmcqoyegejktcrl5ivjd9hrn5r1tq3i5rdzqg4e2vln4lf1whtd821mdp0s4bkoz8ttq3bvdqn6btorvl6p58di9l3dxnm5wkrdlang3fgl8a24lziuj5z3i8nyw6lwo4iu3h7h6no4gr54xmotpdeilk43f138knwh9psgobcpe9y06e8qv02ysjacojic49a73eantixqidgdlydtboy19qu98jm11qvn5kkdchcgss19u9lo59im6joayc2g9cqn6s310uu4tw5m2tk74sm8dcj4ny8bsnqx2oat6xiyn5kus91nt8s3qyw8qif7y4vygqmm6z10e4aoyu45qzegymcbynuniwujsfaisj76h3q7yck9g7xnel7zy1frgymze5a1ervxgdb3h9t0uizxnwgrg91i3q8inb09ovk8levh24o5kpmb4vr377k5uh4uh13g915evsjgjkl7u57xc46zvy89j5jb2tnamsuq5191f78ej7bj7a3s52aewm66lacr0z6xy5jchgvx6ri4gqvpced3w4pm4v8rruyd7hgqjildbsn5mfst36890rlwyifut278lkkq0qg5h1nux6swt6j2r8mn9icung2i0c7qbz2d3acxzzarj2fw9kjl56e1j8np0
rx0s52u0rxtxt8duzor75ws9hg1ueyxl4m8tjw1olflbgql09d1kq0zgph9odcv1k8ogp0rk6nrcw6uscige2nlcgqewm010pk1tdxr5drm8to2kp2j6venrxj822wrorb06vkvx8cnecr1w5ue7yhht831hoaljetl7t6cterhge5iqitpnt7huacbb68x1mxzio5lktuucq8gmh18niqn6lgmp10v0lab89qmhui0o9zpwk77u20vhryt8ktr7asco5o8m7dst3u0z1k4ur6wpvw8c8azri2gx20nmviolwz80yoas6r6ggde4d75n04gb351v6vf65dhlj5y8y6ktr7gme2gs8oftilp90cdsvhjy9e3ttg249m3a6m0v9bthxdou2xnq84xycnkxe635q22okpi5t57uqrxtf6rd3x9c1lvf0upwjgsqpkkajs1ppx52mv1lvro27ifi59z48b12d15947r1jikdqmz4u4vswievkzft92m1tdex6935fxdinkc1kaxtamodrvucxdz6mdao5b8a4impsss4k1sm69z1q60ycoocsrtdsf8b1m93e6i9k0r1uv76i3h9cgzjz47d6k4p9ltfire7sbzbgc0m2wwwhtns0tye5e1a3f1r3xumcqicrellatd31qnohv62sy3jbgi5qagdcpbnt9x6ylo60dcemj2s9kqtlbgazzvrhcyjvnblurndahidap90uwbta2oxnhy9bw17jeh3rfpr7mvcpdv836i40g908utek25r0fmzlklhohb5gcv2ara8yl7lipcytlzbgugqzd4laul41cceelur51l5w89b5u3xfjnkm90scsxaf4spy7beuq1h89h17h2jqpopwlm28c2z5rb8v7g8wya43j0hx04i7py3o5up10vixqzbk91v1npephinbq9x24qs0fur258e2fzn8tyxpybnboj3whb7jglvnpwwcklig1yedo2ejy8zs4izm7tw2kjx818vlm1giwn845cf54xowhdiyw0a474qaaw5tq3cp7h5mraf50zcdfsbtqe4nbtg12393mb7m0t392tp2qble0e7136ugci7sfb6uzsky8sxgfbphjqexl0lh7yntm01cplewsf4mkueptcyxz92r6sjf091qwu0mjrkbra6yy2u7pmx7nnr6itnsqunict2t3hycemofbxrqew0axni6l7rspzcqyu9jekoymbqbht515x1lrk8fjr3q0iumqpupjxh7232lgmrf3osavh4gh70lpo16bng4eykhfglz1ztyr5nsipopz43x964plmzjldc1rgzxrt6noxjak6ngp1ieaaa350sxjqvusze78hxcptj1mhnx5pitiearnt3qfjjmq263eazgbj7xifz5f2anudjq8b310ybu2uayj0n71tm1i0zl3gl3bh4ynur7yjugvbfsclc3iuwdv8x6deidyluixkz9o608c56etjiniwppiq6swtrx6228j4rjk5cwtw51jmzurul9f05vkzql431l0iom7bfdkzjl0u22wme655asct1meq1yznug0u6eayqkbhswzpdnfgexki81tlkxr58kti5i7nph156mguk4aypixp9b7lkn20vi33jb4givbewfj4asoh0p2kguqk8q5paool1zbgnig63rbx2urpvrh8i7kshce9tzszcjuene2q60sgyl6trplus9ksrr8a3ai4wa43n0vv3xbfwtmluwzentuy9k4ml4sff6h7q71t347vwlu9lj2pu2059c11uikn3mey16tj96ve9skdxjqwfufrqz2ajrwrngryx80gtp0xmkxv8yd3m7n031oiyzeqwrzhd3q2zgn69mvrg32thkwqp8lrh9ulfryjyb1uatsqu1ng4qb797l6qnwldp3nbegu09wtgfr8g75myy9fq8yhhp1fha3qnqmunnvuy2pud6lkodffzrlrdbhsj2v1ukv8s1807dgsu68de84cz893ibqvb8ee13sdnnyyv54sjocdmtp4 == 
\e\f\k\5\y\8\c\7\8\d\8\j\u\r\9\6\d\5\t\5\t\n\9\i\5\e\f\5\s\y\1\e\l\o\2\s\4\4\r\j\5\f\w\f\9\k\j\6\q\f\j\8\x\c\m\j\6\l\j\1\p\a\f\8\4\6\c\e\4\g\i\r\l\9\i\1\u\l\v\9\t\z\d\m\r\n\x\3\8\c\5\o\x\z\b\t\w\f\z\u\y\y\6\j\s\9\o\2\o\3\y\o\v\e\d\k\e\t\a\f\q\n\h\5\k\g\f\r\q\i\0\1\p\r\n\5\l\o\a\3\9\m\j\u\2\m\x\5\9\2\q\0\0\k\o\h\l\p\i\n\6\i\y\v\b\h\4\s\i\k\p\y\s\j\m\s\v\l\r\e\l\j\8\t\w\4\f\r\9\v\s\j\z\6\c\q\b\3\g\4\2\j\w\p\5\d\8\c\y\d\7\r\n\d\j\m\j\p\0\6\p\k\9\o\b\a\s\w\f\k\8\2\f\a\m\7\b\3\0\d\d\g\f\j\3\z\o\i\a\p\p\6\8\d\z\e\1\3\e\k\s\p\l\n\i\4\u\6\6\5\a\1\c\z\0\s\4\3\6\a\7\e\2\1\0\9\a\1\z\r\h\b\d\o\5\h\r\y\y\e\e\s\o\u\9\g\w\9\e\h\w\2\k\o\e\b\v\h\u\x\g\k\c\w\d\3\e\i\y\1\z\n\2\8\k\h\t\g\x\n\x\p\a\e\6\2\o\0\2\b\k\5\y\4\5\j\o\l\b\v\h\h\j\x\4\c\5\1\2\e\7\j\u\g\1\2\i\x\w\3\2\b\b\r\m\9\i\v\1\6\g\x\u\q\w\z\8\g\3\s\l\d\j\r\h\c\b\q\m\c\u\k\9\g\s\6\v\1\n\f\7\d\5\l\t\u\6\u\m\l\q\d\p\z\g\c\f\5\e\7\n\a\0\q\u\u\j\4\d\0\r\5\e\4\t\p\d\v\7\v\1\g\6\o\x\c\w\5\1\6\8\c\z\a\f\g\9\z\t\3\1\t\h\p\c\4\b\1\3\i\8\l\8\w\a\y\s\e\1\d\f\c\o\e\i\v\l\r\z\b\z\4\c\p\h\1\k\8\0\a\u\f\r\0\8\x\w\o\y\5\j\t\2\5\5\1\w\7\v\w\t\3\q\f\p\4\9\i\b\t\m\9\b\7\i\t\e\j\k\w\t\7\b\v\e\8\d\x\s\l\v\t\h\y\n\w\g\y\t\s\t\u\7\o\x\7\3\2\s\3\y\3\a\i\l\y\8\k\0\t\d\j\b\3\w\z\i\j\2\s\5\a\r\1\v\g\t\p\w\w\b\q\p\s\c\s\c\g\w\6\6\a\r\p\r\k\7\8\b\v\x\f\2\0\5\g\q\3\t\c\8\g\p\l\c\h\o\g\6\0\y\2\1\e\t\k\d\i\i\e\a\7\h\t\n\m\x\b\u\m\o\5\p\h\p\u\3\v\d\q\y\k\5\a\a\o\2\4\x\l\4\e\d\d\w\f\z\n\t\v\f\0\4\t\f\m\0\g\6\4\v\0\n\t\2\x\1\4\z\p\k\l\0\5\5\k\e\5\z\0\x\e\w\y\4\d\9\t\3\b\s\i\o\p\x\e\x\g\d\m\c\q\7\v\b\1\m\l\q\t\2\4\w\2\t\g\b\o\y\o\0\o\k\2\o\f\e\t\4\w\p\9\p\r\6\f\f\f\g\w\3\u\r\6\7\t\7\g\n\a\x\b\z\d\6\t\e\m\s\4\5\v\1\p\e\p\f\w\o\4\p\b\m\n\i\y\9\8\a\c\z\1\6\b\0\w\h\g\e\i\z\d\l\4\8\s\s\4\a\l\k\q\w\k\9\6\4\5\m\r\y\m\c\p\x\6\e\d\m\v\u\4\j\a\t\5\w\d\u\e\q\s\6\j\v\h\w\5\a\1\c\4\z\z\v\l\4\r\m\5\1\x\6\q\s\n\f\4\w\9\7\v\e\f\p\a\6\l\o\x\0\9\8\4\8\d\z\3\4\n\d\y\2\j\d\a\5\c\h\z\w\i\y\8\y\z\6\y\b\1\k\i\b\w\w\2\u\t\d\x\o\x\w\5\a\v\t\x\m\6\z\9\j\i\g\s\0\g\d\4\4\q\o\r\s\1\e\8\a\4\o\2\9\p\u\s\s\v\m\6\e\6\i\v\r\g\m\r\t\8\l\8\m\n\8\t\9\x\1\e\i\g\i\q\7\p\k\j\d\p\g\u\p\z\n\9\0\n\m\7\q\8\y\x\p\z\5\8\4\m\p\p\c\3\h\u\d\4\1\j\z\o\o\d\x\d\e\h\v\3\t\z\b\m\k\2\n\w\7\i\6\7\n\h\4\5\p\f\w\r\9\d\j\v\o\2\d\5\h\u\q\o\m\f\4\d\f\6\8\3\0\o\j\m\y\p\i\i\n\b\3\7\0\u\7\z\s\e\w\k\m\y\d\w\q\f\l\1\4\5\7\1\p\0\v\h\2\7\8\x\8\l\a\z\f\t\d\d\y\c\c\w\i\a\h\c\0\3\e\6\7\1\p\i\c\c\j\j\o\2\1\y\1\6\6\o\6\h\q\o\6\f\o\i\k\m\6\o\2\c\f\v\2\a\h\6\e\5\b\q\n\4\w\m\b\c\0\3\5\a\a\e\r\u\w\h\4\v\t\n\z\2\i\n\u\4\0\4\w\c\i\6\1\x\l\h\w\g\u\t\5\1\d\z\8\h\9\2\4\4\7\u\6\x\k\8\y\8\6\s\l\6\c\s\m\n\r\t\m\o\e\r\c\k\a\a\o\j\q\8\j\5\5\2\k\a\h\4\2\m\l\c\p\r\s\a\5\2\t\i\k\5\n\l\o\m\j\p\8\y\c\p\7\p\a\p\d\i\2\1\u\w\c\m\l\x\m\d\f\3\p\r\i\l\9\4\q\o\z\k\0\y\8\6\y\9\n\q\9\3\y\3\7\9\x\5\2\d\i\g\e\u\9\r\2\j\m\c\q\o\y\e\g\e\j\k\t\c\r\l\5\i\v\j\d\9\h\r\n\5\r\1\t\q\3\i\5\r\d\z\q\g\4\e\2\v\l\n\4\l\f\1\w\h\t\d\8\2\1\m\d\p\0\s\4\b\k\o\z\8\t\t\q\3\b\v\d\q\n\6\b\t\o\r\v\l\6\p\5\8\d\i\9\l\3\d\x\n\m\5\w\k\r\d\l\a\n\g\3\f\g\l\8\a\2\4\l\z\i\u\j\5\z\3\i\8\n\y\w\6\l\w\o\4\i\u\3\h\7\h\6\n\o\4\g\r\5\4\x\m\o\t\p\d\e\i\l\k\4\3\f\1\3\8\k\n\w\h\9\p\s\g\o\b\c\p\e\9\y\0\6\e\8\q\v\0\2\y\s\j\a\c\o\j\i\c\4\9\a\7\3\e\a\n\t\i\x\q\i\d\g\d\l\y\d\t\b\o\y\1\9\q\u\9\8\j\m\1\1\q\v\n\5\k\k\d\c\h\c\g\s\s\1\9\u\9\l\o\5\9\i\m\6\j\o\a\y\c\2\g\9\c\q\n\6\s\3\1\0\u\u\4\t\w\5\m\2\t\k\7\4\s\m\8\d\c\j\4\n\y\8\b\s\n\q\x\2\o\a\t\6\x\i\y\n\5\k\u\s\9\1\n\t\8\s\3\q\y\w\8\q\i\f\7\y\4\v\y\g\q\m\m\6\z\1\0\e\4\a\o\y\u\4\5\q\z\e\g\y\m\c\b\y\n\u\n\i\w\u\j\s\f\a\i\s\j\7\6\h\3\q\7\y\c\k\9\g\7\x\n\e\l\7\z\y\1\f\r\g\y\m\z\e\5\a\1\e\
r\v\x\g\d\b\3\h\9\t\0\u\i\z\x\n\w\g\r\g\9\1\i\3\q\8\i\n\b\0\9\o\v\k\8\l\e\v\h\2\4\o\5\k\p\m\b\4\v\r\3\7\7\k\5\u\h\4\u\h\1\3\g\9\1\5\e\v\s\j\g\j\k\l\7\u\5\7\x\c\4\6\z\v\y\8\9\j\5\j\b\2\t\n\a\m\s\u\q\5\1\9\1\f\7\8\e\j\7\b\j\7\a\3\s\5\2\a\e\w\m\6\6\l\a\c\r\0\z\6\x\y\5\j\c\h\g\v\x\6\r\i\4\g\q\v\p\c\e\d\3\w\4\p\m\4\v\8\r\r\u\y\d\7\h\g\q\j\i\l\d\b\s\n\5\m\f\s\t\3\6\8\9\0\r\l\w\y\i\f\u\t\2\7\8\l\k\k\q\0\q\g\5\h\1\n\u\x\6\s\w\t\6\j\2\r\8\m\n\9\i\c\u\n\g\2\i\0\c\7\q\b\z\2\d\3\a\c\x\z\z\a\r\j\2\f\w\9\k\j\l\5\6\e\1\j\8\n\p\0\r\x\0\s\5\2\u\0\r\x\t\x\t\8\d\u\z\o\r\7\5\w\s\9\h\g\1\u\e\y\x\l\4\m\8\t\j\w\1\o\l\f\l\b\g\q\l\0\9\d\1\k\q\0\z\g\p\h\9\o\d\c\v\1\k\8\o\g\p\0\r\k\6\n\r\c\w\6\u\s\c\i\g\e\2\n\l\c\g\q\e\w\m\0\1\0\p\k\1\t\d\x\r\5\d\r\m\8\t\o\2\k\p\2\j\6\v\e\n\r\x\j\8\2\2\w\r\o\r\b\0\6\v\k\v\x\8\c\n\e\c\r\1\w\5\u\e\7\y\h\h\t\8\3\1\h\o\a\l\j\e\t\l\7\t\6\c\t\e\r\h\g\e\5\i\q\i\t\p\n\t\7\h\u\a\c\b\b\6\8\x\1\m\x\z\i\o\5\l\k\t\u\u\c\q\8\g\m\h\1\8\n\i\q\n\6\l\g\m\p\1\0\v\0\l\a\b\8\9\q\m\h\u\i\0\o\9\z\p\w\k\7\7\u\2\0\v\h\r\y\t\8\k\t\r\7\a\s\c\o\5\o\8\m\7\d\s\t\3\u\0\z\1\k\4\u\r\6\w\p\v\w\8\c\8\a\z\r\i\2\g\x\2\0\n\m\v\i\o\l\w\z\8\0\y\o\a\s\6\r\6\g\g\d\e\4\d\7\5\n\0\4\g\b\3\5\1\v\6\v\f\6\5\d\h\l\j\5\y\8\y\6\k\t\r\7\g\m\e\2\g\s\8\o\f\t\i\l\p\9\0\c\d\s\v\h\j\y\9\e\3\t\t\g\2\4\9\m\3\a\6\m\0\v\9\b\t\h\x\d\o\u\2\x\n\q\8\4\x\y\c\n\k\x\e\6\3\5\q\2\2\o\k\p\i\5\t\5\7\u\q\r\x\t\f\6\r\d\3\x\9\c\1\l\v\f\0\u\p\w\j\g\s\q\p\k\k\a\j\s\1\p\p\x\5\2\m\v\1\l\v\r\o\2\7\i\f\i\5\9\z\4\8\b\1\2\d\1\5\9\4\7\r\1\j\i\k\d\q\m\z\4\u\4\v\s\w\i\e\v\k\z\f\t\9\2\m\1\t\d\e\x\6\9\3\5\f\x\d\i\n\k\c\1\k\a\x\t\a\m\o\d\r\v\u\c\x\d\z\6\m\d\a\o\5\b\8\a\4\i\m\p\s\s\s\4\k\1\s\m\6\9\z\1\q\6\0\y\c\o\o\c\s\r\t\d\s\f\8\b\1\m\9\3\e\6\i\9\k\0\r\1\u\v\7\6\i\3\h\9\c\g\z\j\z\4\7\d\6\k\4\p\9\l\t\f\i\r\e\7\s\b\z\b\g\c\0\m\2\w\w\w\h\t\n\s\0\t\y\e\5\e\1\a\3\f\1\r\3\x\u\m\c\q\i\c\r\e\l\l\a\t\d\3\1\q\n\o\h\v\6\2\s\y\3\j\b\g\i\5\q\a\g\d\c\p\b\n\t\9\x\6\y\l\o\6\0\d\c\e\m\j\2\s\9\k\q\t\l\b\g\a\z\z\v\r\h\c\y\j\v\n\b\l\u\r\n\d\a\h\i\d\a\p\9\0\u\w\b\t\a\2\o\x\n\h\y\9\b\w\1\7\j\e\h\3\r\f\p\r\7\m\v\c\p\d\v\8\3\6\i\4\0\g\9\0\8\u\t\e\k\2\5\r\0\f\m\z\l\k\l\h\o\h\b\5\g\c\v\2\a\r\a\8\y\l\7\l\i\p\c\y\t\l\z\b\g\u\g\q\z\d\4\l\a\u\l\4\1\c\c\e\e\l\u\r\5\1\l\5\w\8\9\b\5\u\3\x\f\j\n\k\m\9\0\s\c\s\x\a\f\4\s\p\y\7\b\e\u\q\1\h\8\9\h\1\7\h\2\j\q\p\o\p\w\l\m\2\8\c\2\z\5\r\b\8\v\7\g\8\w\y\a\4\3\j\0\h\x\0\4\i\7\p\y\3\o\5\u\p\1\0\v\i\x\q\z\b\k\9\1\v\1\n\p\e\p\h\i\n\b\q\9\x\2\4\q\s\0\f\u\r\2\5\8\e\2\f\z\n\8\t\y\x\p\y\b\n\b\o\j\3\w\h\b\7\j\g\l\v\n\p\w\w\c\k\l\i\g\1\y\e\d\o\2\e\j\y\8\z\s\4\i\z\m\7\t\w\2\k\j\x\8\1\8\v\l\m\1\g\i\w\n\8\4\5\c\f\5\4\x\o\w\h\d\i\y\w\0\a\4\7\4\q\a\a\w\5\t\q\3\c\p\7\h\5\m\r\a\f\5\0\z\c\d\f\s\b\t\q\e\4\n\b\t\g\1\2\3\9\3\m\b\7\m\0\t\3\9\2\t\p\2\q\b\l\e\0\e\7\1\3\6\u\g\c\i\7\s\f\b\6\u\z\s\k\y\8\s\x\g\f\b\p\h\j\q\e\x\l\0\l\h\7\y\n\t\m\0\1\c\p\l\e\w\s\f\4\m\k\u\e\p\t\c\y\x\z\9\2\r\6\s\j\f\0\9\1\q\w\u\0\m\j\r\k\b\r\a\6\y\y\2\u\7\p\m\x\7\n\n\r\6\i\t\n\s\q\u\n\i\c\t\2\t\3\h\y\c\e\m\o\f\b\x\r\q\e\w\0\a\x\n\i\6\l\7\r\s\p\z\c\q\y\u\9\j\e\k\o\y\m\b\q\b\h\t\5\1\5\x\1\l\r\k\8\f\j\r\3\q\0\i\u\m\q\p\u\p\j\x\h\7\2\3\2\l\g\m\r\f\3\o\s\a\v\h\4\g\h\7\0\l\p\o\1\6\b\n\g\4\e\y\k\h\f\g\l\z\1\z\t\y\r\5\n\s\i\p\o\p\z\4\3\x\9\6\4\p\l\m\z\j\l\d\c\1\r\g\z\x\r\t\6\n\o\x\j\a\k\6\n\g\p\1\i\e\a\a\a\3\5\0\s\x\j\q\v\u\s\z\e\7\8\h\x\c\p\t\j\1\m\h\n\x\5\p\i\t\i\e\a\r\n\t\3\q\f\j\j\m\q\2\6\3\e\a\z\g\b\j\7\x\i\f\z\5\f\2\a\n\u\d\j\q\8\b\3\1\0\y\b\u\2\u\a\y\j\0\n\7\1\t\m\1\i\0\z\l\3\g\l\3\b\h\4\y\n\u\r\7\y\j\u\g\v\b\f\s\c\l\c\3\i\u\w\d\v\8\x\6\d\e\i\d\y\l\u\i\x\k\z\9\o\6\0\8\c\5\6\e\t\j\i\n\i\w\p\p\i\q\6\s\w\t\r\x\6
\2\2\8\j\4\r\j\k\5\c\w\t\w\5\1\j\m\z\u\r\u\l\9\f\0\5\v\k\z\q\l\4\3\1\l\0\i\o\m\7\b\f\d\k\z\j\l\0\u\2\2\w\m\e\6\5\5\a\s\c\t\1\m\e\q\1\y\z\n\u\g\0\u\6\e\a\y\q\k\b\h\s\w\z\p\d\n\f\g\e\x\k\i\8\1\t\l\k\x\r\5\8\k\t\i\5\i\7\n\p\h\1\5\6\m\g\u\k\4\a\y\p\i\x\p\9\b\7\l\k\n\2\0\v\i\3\3\j\b\4\g\i\v\b\e\w\f\j\4\a\s\o\h\0\p\2\k\g\u\q\k\8\q\5\p\a\o\o\l\1\z\b\g\n\i\g\6\3\r\b\x\2\u\r\p\v\r\h\8\i\7\k\s\h\c\e\9\t\z\s\z\c\j\u\e\n\e\2\q\6\0\s\g\y\l\6\t\r\p\l\u\s\9\k\s\r\r\8\a\3\a\i\4\w\a\4\3\n\0\v\v\3\x\b\f\w\t\m\l\u\w\z\e\n\t\u\y\9\k\4\m\l\4\s\f\f\6\h\7\q\7\1\t\3\4\7\v\w\l\u\9\l\j\2\p\u\2\0\5\9\c\1\1\u\i\k\n\3\m\e\y\1\6\t\j\9\6\v\e\9\s\k\d\x\j\q\w\f\u\f\r\q\z\2\a\j\r\w\r\n\g\r\y\x\8\0\g\t\p\0\x\m\k\x\v\8\y\d\3\m\7\n\0\3\1\o\i\y\z\e\q\w\r\z\h\d\3\q\2\z\g\n\6\9\m\v\r\g\3\2\t\h\k\w\q\p\8\l\r\h\9\u\l\f\r\y\j\y\b\1\u\a\t\s\q\u\1\n\g\4\q\b\7\9\7\l\6\q\n\w\l\d\p\3\n\b\e\g\u\0\9\w\t\g\f\r\8\g\7\5\m\y\y\9\f\q\8\y\h\h\p\1\f\h\a\3\q\n\q\m\u\n\n\v\u\y\2\p\u\d\6\l\k\o\d\f\f\z\r\l\r\d\b\h\s\j\2\v\1\u\k\v\8\s\1\8\0\7\d\g\s\u\6\8\d\e\8\4\c\z\8\9\3\i\b\q\v\b\8\e\e\1\3\s\d\n\n\y\y\v\5\4\s\j\o\c\d\m\t\p\4 ]] 00:13:52.686 00:13:52.686 real 0m1.219s 00:13:52.686 user 0m0.794s 00:13:52.686 sys 0m0.537s 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:13:52.686 ************************************ 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:13:52.686 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:13:52.687 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:13:52.687 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:13:52.687 13:32:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:13:52.687 13:32:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:52.944 [2024-05-15 13:32:05.806145] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:52.944 [2024-05-15 13:32:05.806296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75987 ] 00:13:52.944 { 00:13:52.944 "subsystems": [ 00:13:52.944 { 00:13:52.944 "subsystem": "bdev", 00:13:52.944 "config": [ 00:13:52.944 { 00:13:52.944 "params": { 00:13:52.944 "trtype": "pcie", 00:13:52.944 "traddr": "0000:00:10.0", 00:13:52.944 "name": "Nvme0" 00:13:52.944 }, 00:13:52.944 "method": "bdev_nvme_attach_controller" 00:13:52.944 }, 00:13:52.944 { 00:13:52.944 "method": "bdev_wait_for_examine" 00:13:52.944 } 00:13:52.944 ] 00:13:52.944 } 00:13:52.944 ] 00:13:52.944 } 00:13:52.944 [2024-05-15 13:32:05.934815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:13:52.944 [2024-05-15 13:32:05.952427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.944 [2024-05-15 13:32:06.005826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.460  Copying: 1024/1024 [kB] (average 1000 MBps) 00:13:53.460 00:13:53.460 13:32:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:53.460 00:13:53.460 real 0m16.974s 00:13:53.460 user 0m11.614s 00:13:53.460 sys 0m6.333s 00:13:53.460 13:32:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.460 13:32:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 ************************************ 00:13:53.460 END TEST spdk_dd_basic_rw 00:13:53.460 ************************************ 00:13:53.460 13:32:06 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:53.460 13:32:06 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:53.460 13:32:06 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.460 13:32:06 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 ************************************ 00:13:53.460 START TEST spdk_dd_posix 00:13:53.460 ************************************ 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:13:53.460 * Looking for test storage... 00:13:53.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:13:53.460 * First test run, liburing in use 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 ************************************ 00:13:53.460 START TEST dd_flag_append 00:13:53.460 ************************************ 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=fvzwvwswcchumfplrj95eub6zzevu9g1 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:13:53.460 13:32:06 
spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=41ccgsi9y758awp55ywzteljbv6i9bir 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s fvzwvwswcchumfplrj95eub6zzevu9g1 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 41ccgsi9y758awp55ywzteljbv6i9bir 00:13:53.460 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:13:53.460 [2024-05-15 13:32:06.539420] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:53.460 [2024-05-15 13:32:06.539512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:13:53.718 [2024-05-15 13:32:06.659022] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:53.718 [2024-05-15 13:32:06.677451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.718 [2024-05-15 13:32:06.729722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.976  Copying: 32/32 [B] (average 31 kBps) 00:13:53.976 00:13:53.976 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 41ccgsi9y758awp55ywzteljbv6i9birfvzwvwswcchumfplrj95eub6zzevu9g1 == \4\1\c\c\g\s\i\9\y\7\5\8\a\w\p\5\5\y\w\z\t\e\l\j\b\v\6\i\9\b\i\r\f\v\z\w\v\w\s\w\c\c\h\u\m\f\p\l\r\j\9\5\e\u\b\6\z\z\e\v\u\9\g\1 ]] 00:13:53.976 00:13:53.976 real 0m0.481s 00:13:53.976 user 0m0.252s 00:13:53.976 sys 0m0.225s 00:13:53.976 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.976 ************************************ 00:13:53.976 END TEST dd_flag_append 00:13:53.977 13:32:06 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:13:53.977 ************************************ 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:13:53.977 ************************************ 00:13:53.977 START TEST dd_flag_directory 00:13:53.977 ************************************ 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:53.977 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:53.977 [2024-05-15 13:32:07.067798] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:53.977 [2024-05-15 13:32:07.068391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76079 ] 00:13:54.242 [2024-05-15 13:32:07.188750] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:54.242 [2024-05-15 13:32:07.204163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.242 [2024-05-15 13:32:07.266365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.242 [2024-05-15 13:32:07.333543] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:54.242 [2024-05-15 13:32:07.333630] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:54.242 [2024-05-15 13:32:07.333652] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:54.501 [2024-05-15 13:32:07.431576] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:54.501 13:32:07 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:13:54.501 [2024-05-15 13:32:07.579041] Starting SPDK v24.05-pre git sha1 253cca4fc / 
DPDK 24.07.0-rc0 initialization... 00:13:54.501 [2024-05-15 13:32:07.579201] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76089 ] 00:13:54.759 [2024-05-15 13:32:07.711401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:54.759 [2024-05-15 13:32:07.730233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.760 [2024-05-15 13:32:07.782760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.760 [2024-05-15 13:32:07.852433] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:54.760 [2024-05-15 13:32:07.852497] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:13:54.760 [2024-05-15 13:32:07.852512] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.019 [2024-05-15 13:32:07.944644] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.019 00:13:55.019 real 0m1.005s 00:13:55.019 user 0m0.509s 00:13:55.019 sys 0m0.284s 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:55.019 ************************************ 00:13:55.019 END TEST dd_flag_directory 00:13:55.019 ************************************ 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:13:55.019 ************************************ 00:13:55.019 START TEST dd_flag_nofollow 00:13:55.019 ************************************ 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow 
-- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:55.019 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:55.278 [2024-05-15 13:32:08.124935] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:55.278 [2024-05-15 13:32:08.125492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76116 ] 00:13:55.278 [2024-05-15 13:32:08.245684] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
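The dd_flag_nofollow block above symlinks dd.dump0 and dd.dump1 to *.link with ln -fs and then drives spdk_dd through the harness's NOT wrapper: the runs with --iflag=nofollow or --oflag=nofollow are expected to fail with ELOOP ("Too many levels of symbolic links" in the output below), and only the final run without the flag may resolve the link and copy the 512 bytes. A rough stand-in for the same check using GNU dd instead of the SPDK binary (file names as in the trace; exact error text depends on the platform):

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=dd.dump1   # should fail: the input is opened with O_NOFOLLOW, so ELOOP
    dd if=dd.dump0.link of=dd.dump1                  # without nofollow the symlink is resolved and the copy succeeds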
00:13:55.278 [2024-05-15 13:32:08.261124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.278 [2024-05-15 13:32:08.343202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.537 [2024-05-15 13:32:08.419738] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:55.537 [2024-05-15 13:32:08.419800] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:13:55.537 [2024-05-15 13:32:08.419816] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.537 [2024-05-15 13:32:08.512664] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:13:55.537 13:32:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:13:55.795 [2024-05-15 13:32:08.652883] Starting 
SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:55.795 [2024-05-15 13:32:08.652991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76127 ] 00:13:55.795 [2024-05-15 13:32:08.783664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:55.795 [2024-05-15 13:32:08.799838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.795 [2024-05-15 13:32:08.853681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.053 [2024-05-15 13:32:08.921442] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:56.053 [2024-05-15 13:32:08.921504] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:13:56.053 [2024-05-15 13:32:08.921521] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:56.053 [2024-05-15 13:32:09.026506] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:13:56.053 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:56.320 [2024-05-15 13:32:09.178053] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:56.321 [2024-05-15 13:32:09.178165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76133 ] 00:13:56.321 [2024-05-15 13:32:09.304063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
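The es= lines that follow each expected failure are the NOT wrapper in common/autotest_common.sh normalizing the exit status before asserting that the command really failed: statuses above 128 are folded back into range (236 becomes 108, 216 becomes 88 in the runs above), the case statement then collapses any remaining failure to 1, and (( !es == 0 )) only succeeds for a non-zero status. A condensed sketch of the pattern as it appears in the trace (the arithmetic is inferred from the logged values; the real wrapper handles more cases):

    es=$?                                  # exit status of the spdk_dd call that was expected to fail
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-range statuses down, e.g. 236 -> 108
    es=1                                   # the case "$es" branch collapses any failure to a generic 1
    (( !es == 0 ))                         # true only because es != 0, i.e. the command did fail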
00:13:56.321 [2024-05-15 13:32:09.323350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.321 [2024-05-15 13:32:09.380011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.588  Copying: 512/512 [B] (average 500 kBps) 00:13:56.588 00:13:56.588 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ njnnqih9y6macs9wp8a592pwp5lwh822ayxis6gzopu8fn8ig5471s7oyqsy257q447s6e81b6xxbxamdpg9hp0vxy59ab2t9tp2ajogkh9v39hr9et1km52lnqaqsz36p3ivma59357ph0fay0jag7yelxrbm74ww2s2a80o6b08gmkh3vaef48d2945vhdhy6yc9qt2env05e0yduwxct33shsk6zdujphkdjfxf5fbss5h6ls67ac7vbmogh9bthqn66jfv3wvgqma4qz76glzikr0h57y1x6fi71yhthmebmivesvinmbu2p2uavqtj4jhd0cxh0efl2tvhrqxm21pz14u0r18jg1b1jtz8r2rq9382s5eitr2xe41w2qrmbbovzlds1nmvb07uohs0hv2vzq8pafgp7tvdgeksw8jdgjpgc43zj4vlaswq9h2rgf3y9bed5fee092vsmchhftoxof46d1i3e4pudykdxr26g6aabjczqjzjvjgi == \n\j\n\n\q\i\h\9\y\6\m\a\c\s\9\w\p\8\a\5\9\2\p\w\p\5\l\w\h\8\2\2\a\y\x\i\s\6\g\z\o\p\u\8\f\n\8\i\g\5\4\7\1\s\7\o\y\q\s\y\2\5\7\q\4\4\7\s\6\e\8\1\b\6\x\x\b\x\a\m\d\p\g\9\h\p\0\v\x\y\5\9\a\b\2\t\9\t\p\2\a\j\o\g\k\h\9\v\3\9\h\r\9\e\t\1\k\m\5\2\l\n\q\a\q\s\z\3\6\p\3\i\v\m\a\5\9\3\5\7\p\h\0\f\a\y\0\j\a\g\7\y\e\l\x\r\b\m\7\4\w\w\2\s\2\a\8\0\o\6\b\0\8\g\m\k\h\3\v\a\e\f\4\8\d\2\9\4\5\v\h\d\h\y\6\y\c\9\q\t\2\e\n\v\0\5\e\0\y\d\u\w\x\c\t\3\3\s\h\s\k\6\z\d\u\j\p\h\k\d\j\f\x\f\5\f\b\s\s\5\h\6\l\s\6\7\a\c\7\v\b\m\o\g\h\9\b\t\h\q\n\6\6\j\f\v\3\w\v\g\q\m\a\4\q\z\7\6\g\l\z\i\k\r\0\h\5\7\y\1\x\6\f\i\7\1\y\h\t\h\m\e\b\m\i\v\e\s\v\i\n\m\b\u\2\p\2\u\a\v\q\t\j\4\j\h\d\0\c\x\h\0\e\f\l\2\t\v\h\r\q\x\m\2\1\p\z\1\4\u\0\r\1\8\j\g\1\b\1\j\t\z\8\r\2\r\q\9\3\8\2\s\5\e\i\t\r\2\x\e\4\1\w\2\q\r\m\b\b\o\v\z\l\d\s\1\n\m\v\b\0\7\u\o\h\s\0\h\v\2\v\z\q\8\p\a\f\g\p\7\t\v\d\g\e\k\s\w\8\j\d\g\j\p\g\c\4\3\z\j\4\v\l\a\s\w\q\9\h\2\r\g\f\3\y\9\b\e\d\5\f\e\e\0\9\2\v\s\m\c\h\h\f\t\o\x\o\f\4\6\d\1\i\3\e\4\p\u\d\y\k\d\x\r\2\6\g\6\a\a\b\j\c\z\q\j\z\j\v\j\g\i ]] 00:13:56.588 00:13:56.588 real 0m1.589s 00:13:56.588 user 0m0.832s 00:13:56.588 sys 0m0.551s 00:13:56.588 ************************************ 00:13:56.588 END TEST dd_flag_nofollow 00:13:56.588 ************************************ 00:13:56.588 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:56.588 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:13:56.845 ************************************ 00:13:56.845 START TEST dd_flag_noatime 00:13:56.845 ************************************ 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- 
dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715779929 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715779929 00:13:56.845 13:32:09 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:13:57.776 13:32:10 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:57.776 [2024-05-15 13:32:10.785627] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:57.776 [2024-05-15 13:32:10.786321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76177 ] 00:13:58.034 [2024-05-15 13:32:10.914476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:58.034 [2024-05-15 13:32:10.928707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.034 [2024-05-15 13:32:11.006818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.292  Copying: 512/512 [B] (average 500 kBps) 00:13:58.292 00:13:58.292 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:58.292 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715779929 )) 00:13:58.292 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:58.292 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715779929 )) 00:13:58.292 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:13:58.292 [2024-05-15 13:32:11.331066] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:58.292 [2024-05-15 13:32:11.331206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76189 ] 00:13:58.549 [2024-05-15 13:32:11.463149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
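dd_flag_noatime records the access time of dd.dump0 with stat --printf=%X (1715779929 above), sleeps one second, and then copies the file twice: with --iflag=noatime the atime must be unchanged afterwards, while the later copy without the flag must push it past the recorded value. A minimal equivalent with GNU dd and stat, assuming the filesystem is not mounted noatime (and that relatime does not defer the second update):

    before=$(stat --printf=%X dd.dump0)
    sleep 1
    dd if=dd.dump0 iflag=noatime of=dd.dump1          # read without updating the access time
    [ "$(stat --printf=%X dd.dump0)" -eq "$before" ]  # atime unchanged after the noatime read
    dd if=dd.dump0 of=dd.dump1                        # a normal read is allowed to bump the atime
    [ "$(stat --printf=%X dd.dump0)" -gt "$before" ]  # atime advanced past the recorded value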
00:13:58.549 [2024-05-15 13:32:11.476707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.549 [2024-05-15 13:32:11.534878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.807  Copying: 512/512 [B] (average 500 kBps) 00:13:58.807 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715779931 )) 00:13:58.807 00:13:58.807 real 0m2.077s 00:13:58.807 user 0m0.546s 00:13:58.807 sys 0m0.523s 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.807 ************************************ 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:13:58.807 END TEST dd_flag_noatime 00:13:58.807 ************************************ 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:13:58.807 ************************************ 00:13:58.807 START TEST dd_flags_misc 00:13:58.807 ************************************ 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:58.807 13:32:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:13:58.807 [2024-05-15 13:32:11.885416] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:58.807 [2024-05-15 13:32:11.885506] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76219 ] 00:13:59.086 [2024-05-15 13:32:12.006908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
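The dd_flags_misc block that starts above loops a 512-byte random payload through every combination of flags_ro=(direct nonblock) on the input and flags_rw=(direct nonblock sync dsync) on the output, asserting after each copy that dd.dump1 matches dd.dump0 byte for byte (the long [[ ... == ... ]] comparisons that follow). The approximate shape of the loop, with cmp standing in for the string comparison done in the script:

    for iflag in direct nonblock; do
      for oflag in direct nonblock sync dsync; do
        dd if=dd.dump0 iflag=$iflag of=dd.dump1 oflag=$oflag   # 512 B keeps direct I/O alignment requirements satisfied
        cmp dd.dump0 dd.dump1                                  # payload must be identical under every flag combination
      done
    done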
00:13:59.086 [2024-05-15 13:32:12.020302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.086 [2024-05-15 13:32:12.086276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.343  Copying: 512/512 [B] (average 500 kBps) 00:13:59.343 00:13:59.344 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kfo3capr3bd6t2zs2hvbl7glnyn204ekksiueyrfrcgsbao78rjdkkg8lfh14i304xcyisal9uj82lj8gg4ntix2ydff5c9dvehuj4rprjz0v7sdchmdat02f1egsy8nzov2knyuhxq5obdcben7d3docv0kbnwpvl62br6advoyyhwsdc41i7zto594wjmpmulg655e2h0yxfcgt402fc344dnyt2pnwkqwysgy65h778v96z8azz0du265vjrbzspo2jjtq2ecrcnec11eiek9653hbjm5xie6abq9nqnspzxlondn9ru6rvimrof12r5b9zcau16s260ohwqj2mv1h5n8qu2r4gf2btggrpybdsfhkjxrqe6bv993shpx1xvkbu46z9y6d7gwjeclzi4of04aqh621czaqpt1zo6pk87njtcp0c6dl64usu2quh2rzsn1kj4b5bdtdq7rlpg3c7sqmoeg63cv90hujyvbppzym346g96ec6yomlhp == \k\f\o\3\c\a\p\r\3\b\d\6\t\2\z\s\2\h\v\b\l\7\g\l\n\y\n\2\0\4\e\k\k\s\i\u\e\y\r\f\r\c\g\s\b\a\o\7\8\r\j\d\k\k\g\8\l\f\h\1\4\i\3\0\4\x\c\y\i\s\a\l\9\u\j\8\2\l\j\8\g\g\4\n\t\i\x\2\y\d\f\f\5\c\9\d\v\e\h\u\j\4\r\p\r\j\z\0\v\7\s\d\c\h\m\d\a\t\0\2\f\1\e\g\s\y\8\n\z\o\v\2\k\n\y\u\h\x\q\5\o\b\d\c\b\e\n\7\d\3\d\o\c\v\0\k\b\n\w\p\v\l\6\2\b\r\6\a\d\v\o\y\y\h\w\s\d\c\4\1\i\7\z\t\o\5\9\4\w\j\m\p\m\u\l\g\6\5\5\e\2\h\0\y\x\f\c\g\t\4\0\2\f\c\3\4\4\d\n\y\t\2\p\n\w\k\q\w\y\s\g\y\6\5\h\7\7\8\v\9\6\z\8\a\z\z\0\d\u\2\6\5\v\j\r\b\z\s\p\o\2\j\j\t\q\2\e\c\r\c\n\e\c\1\1\e\i\e\k\9\6\5\3\h\b\j\m\5\x\i\e\6\a\b\q\9\n\q\n\s\p\z\x\l\o\n\d\n\9\r\u\6\r\v\i\m\r\o\f\1\2\r\5\b\9\z\c\a\u\1\6\s\2\6\0\o\h\w\q\j\2\m\v\1\h\5\n\8\q\u\2\r\4\g\f\2\b\t\g\g\r\p\y\b\d\s\f\h\k\j\x\r\q\e\6\b\v\9\9\3\s\h\p\x\1\x\v\k\b\u\4\6\z\9\y\6\d\7\g\w\j\e\c\l\z\i\4\o\f\0\4\a\q\h\6\2\1\c\z\a\q\p\t\1\z\o\6\p\k\8\7\n\j\t\c\p\0\c\6\d\l\6\4\u\s\u\2\q\u\h\2\r\z\s\n\1\k\j\4\b\5\b\d\t\d\q\7\r\l\p\g\3\c\7\s\q\m\o\e\g\6\3\c\v\9\0\h\u\j\y\v\b\p\p\z\y\m\3\4\6\g\9\6\e\c\6\y\o\m\l\h\p ]] 00:13:59.344 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:59.344 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:13:59.344 [2024-05-15 13:32:12.374793] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:59.344 [2024-05-15 13:32:12.374881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76233 ] 00:13:59.601 [2024-05-15 13:32:12.500424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:59.601 [2024-05-15 13:32:12.518369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.601 [2024-05-15 13:32:12.574438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.859  Copying: 512/512 [B] (average 500 kBps) 00:13:59.859 00:13:59.860 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kfo3capr3bd6t2zs2hvbl7glnyn204ekksiueyrfrcgsbao78rjdkkg8lfh14i304xcyisal9uj82lj8gg4ntix2ydff5c9dvehuj4rprjz0v7sdchmdat02f1egsy8nzov2knyuhxq5obdcben7d3docv0kbnwpvl62br6advoyyhwsdc41i7zto594wjmpmulg655e2h0yxfcgt402fc344dnyt2pnwkqwysgy65h778v96z8azz0du265vjrbzspo2jjtq2ecrcnec11eiek9653hbjm5xie6abq9nqnspzxlondn9ru6rvimrof12r5b9zcau16s260ohwqj2mv1h5n8qu2r4gf2btggrpybdsfhkjxrqe6bv993shpx1xvkbu46z9y6d7gwjeclzi4of04aqh621czaqpt1zo6pk87njtcp0c6dl64usu2quh2rzsn1kj4b5bdtdq7rlpg3c7sqmoeg63cv90hujyvbppzym346g96ec6yomlhp == \k\f\o\3\c\a\p\r\3\b\d\6\t\2\z\s\2\h\v\b\l\7\g\l\n\y\n\2\0\4\e\k\k\s\i\u\e\y\r\f\r\c\g\s\b\a\o\7\8\r\j\d\k\k\g\8\l\f\h\1\4\i\3\0\4\x\c\y\i\s\a\l\9\u\j\8\2\l\j\8\g\g\4\n\t\i\x\2\y\d\f\f\5\c\9\d\v\e\h\u\j\4\r\p\r\j\z\0\v\7\s\d\c\h\m\d\a\t\0\2\f\1\e\g\s\y\8\n\z\o\v\2\k\n\y\u\h\x\q\5\o\b\d\c\b\e\n\7\d\3\d\o\c\v\0\k\b\n\w\p\v\l\6\2\b\r\6\a\d\v\o\y\y\h\w\s\d\c\4\1\i\7\z\t\o\5\9\4\w\j\m\p\m\u\l\g\6\5\5\e\2\h\0\y\x\f\c\g\t\4\0\2\f\c\3\4\4\d\n\y\t\2\p\n\w\k\q\w\y\s\g\y\6\5\h\7\7\8\v\9\6\z\8\a\z\z\0\d\u\2\6\5\v\j\r\b\z\s\p\o\2\j\j\t\q\2\e\c\r\c\n\e\c\1\1\e\i\e\k\9\6\5\3\h\b\j\m\5\x\i\e\6\a\b\q\9\n\q\n\s\p\z\x\l\o\n\d\n\9\r\u\6\r\v\i\m\r\o\f\1\2\r\5\b\9\z\c\a\u\1\6\s\2\6\0\o\h\w\q\j\2\m\v\1\h\5\n\8\q\u\2\r\4\g\f\2\b\t\g\g\r\p\y\b\d\s\f\h\k\j\x\r\q\e\6\b\v\9\9\3\s\h\p\x\1\x\v\k\b\u\4\6\z\9\y\6\d\7\g\w\j\e\c\l\z\i\4\o\f\0\4\a\q\h\6\2\1\c\z\a\q\p\t\1\z\o\6\p\k\8\7\n\j\t\c\p\0\c\6\d\l\6\4\u\s\u\2\q\u\h\2\r\z\s\n\1\k\j\4\b\5\b\d\t\d\q\7\r\l\p\g\3\c\7\s\q\m\o\e\g\6\3\c\v\9\0\h\u\j\y\v\b\p\p\z\y\m\3\4\6\g\9\6\e\c\6\y\o\m\l\h\p ]] 00:13:59.860 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:13:59.860 13:32:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:13:59.860 [2024-05-15 13:32:12.866275] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:59.860 [2024-05-15 13:32:12.866406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76238 ] 00:14:00.118 [2024-05-15 13:32:12.989218] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:00.118 [2024-05-15 13:32:13.007841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.118 [2024-05-15 13:32:13.064460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.376  Copying: 512/512 [B] (average 83 kBps) 00:14:00.376 00:14:00.376 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kfo3capr3bd6t2zs2hvbl7glnyn204ekksiueyrfrcgsbao78rjdkkg8lfh14i304xcyisal9uj82lj8gg4ntix2ydff5c9dvehuj4rprjz0v7sdchmdat02f1egsy8nzov2knyuhxq5obdcben7d3docv0kbnwpvl62br6advoyyhwsdc41i7zto594wjmpmulg655e2h0yxfcgt402fc344dnyt2pnwkqwysgy65h778v96z8azz0du265vjrbzspo2jjtq2ecrcnec11eiek9653hbjm5xie6abq9nqnspzxlondn9ru6rvimrof12r5b9zcau16s260ohwqj2mv1h5n8qu2r4gf2btggrpybdsfhkjxrqe6bv993shpx1xvkbu46z9y6d7gwjeclzi4of04aqh621czaqpt1zo6pk87njtcp0c6dl64usu2quh2rzsn1kj4b5bdtdq7rlpg3c7sqmoeg63cv90hujyvbppzym346g96ec6yomlhp == \k\f\o\3\c\a\p\r\3\b\d\6\t\2\z\s\2\h\v\b\l\7\g\l\n\y\n\2\0\4\e\k\k\s\i\u\e\y\r\f\r\c\g\s\b\a\o\7\8\r\j\d\k\k\g\8\l\f\h\1\4\i\3\0\4\x\c\y\i\s\a\l\9\u\j\8\2\l\j\8\g\g\4\n\t\i\x\2\y\d\f\f\5\c\9\d\v\e\h\u\j\4\r\p\r\j\z\0\v\7\s\d\c\h\m\d\a\t\0\2\f\1\e\g\s\y\8\n\z\o\v\2\k\n\y\u\h\x\q\5\o\b\d\c\b\e\n\7\d\3\d\o\c\v\0\k\b\n\w\p\v\l\6\2\b\r\6\a\d\v\o\y\y\h\w\s\d\c\4\1\i\7\z\t\o\5\9\4\w\j\m\p\m\u\l\g\6\5\5\e\2\h\0\y\x\f\c\g\t\4\0\2\f\c\3\4\4\d\n\y\t\2\p\n\w\k\q\w\y\s\g\y\6\5\h\7\7\8\v\9\6\z\8\a\z\z\0\d\u\2\6\5\v\j\r\b\z\s\p\o\2\j\j\t\q\2\e\c\r\c\n\e\c\1\1\e\i\e\k\9\6\5\3\h\b\j\m\5\x\i\e\6\a\b\q\9\n\q\n\s\p\z\x\l\o\n\d\n\9\r\u\6\r\v\i\m\r\o\f\1\2\r\5\b\9\z\c\a\u\1\6\s\2\6\0\o\h\w\q\j\2\m\v\1\h\5\n\8\q\u\2\r\4\g\f\2\b\t\g\g\r\p\y\b\d\s\f\h\k\j\x\r\q\e\6\b\v\9\9\3\s\h\p\x\1\x\v\k\b\u\4\6\z\9\y\6\d\7\g\w\j\e\c\l\z\i\4\o\f\0\4\a\q\h\6\2\1\c\z\a\q\p\t\1\z\o\6\p\k\8\7\n\j\t\c\p\0\c\6\d\l\6\4\u\s\u\2\q\u\h\2\r\z\s\n\1\k\j\4\b\5\b\d\t\d\q\7\r\l\p\g\3\c\7\s\q\m\o\e\g\6\3\c\v\9\0\h\u\j\y\v\b\p\p\z\y\m\3\4\6\g\9\6\e\c\6\y\o\m\l\h\p ]] 00:14:00.376 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:00.376 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:14:00.376 [2024-05-15 13:32:13.364677] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:00.376 [2024-05-15 13:32:13.364767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76243 ] 00:14:00.633 [2024-05-15 13:32:13.485670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:00.633 [2024-05-15 13:32:13.501615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.633 [2024-05-15 13:32:13.554550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.891  Copying: 512/512 [B] (average 500 kBps) 00:14:00.891 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kfo3capr3bd6t2zs2hvbl7glnyn204ekksiueyrfrcgsbao78rjdkkg8lfh14i304xcyisal9uj82lj8gg4ntix2ydff5c9dvehuj4rprjz0v7sdchmdat02f1egsy8nzov2knyuhxq5obdcben7d3docv0kbnwpvl62br6advoyyhwsdc41i7zto594wjmpmulg655e2h0yxfcgt402fc344dnyt2pnwkqwysgy65h778v96z8azz0du265vjrbzspo2jjtq2ecrcnec11eiek9653hbjm5xie6abq9nqnspzxlondn9ru6rvimrof12r5b9zcau16s260ohwqj2mv1h5n8qu2r4gf2btggrpybdsfhkjxrqe6bv993shpx1xvkbu46z9y6d7gwjeclzi4of04aqh621czaqpt1zo6pk87njtcp0c6dl64usu2quh2rzsn1kj4b5bdtdq7rlpg3c7sqmoeg63cv90hujyvbppzym346g96ec6yomlhp == \k\f\o\3\c\a\p\r\3\b\d\6\t\2\z\s\2\h\v\b\l\7\g\l\n\y\n\2\0\4\e\k\k\s\i\u\e\y\r\f\r\c\g\s\b\a\o\7\8\r\j\d\k\k\g\8\l\f\h\1\4\i\3\0\4\x\c\y\i\s\a\l\9\u\j\8\2\l\j\8\g\g\4\n\t\i\x\2\y\d\f\f\5\c\9\d\v\e\h\u\j\4\r\p\r\j\z\0\v\7\s\d\c\h\m\d\a\t\0\2\f\1\e\g\s\y\8\n\z\o\v\2\k\n\y\u\h\x\q\5\o\b\d\c\b\e\n\7\d\3\d\o\c\v\0\k\b\n\w\p\v\l\6\2\b\r\6\a\d\v\o\y\y\h\w\s\d\c\4\1\i\7\z\t\o\5\9\4\w\j\m\p\m\u\l\g\6\5\5\e\2\h\0\y\x\f\c\g\t\4\0\2\f\c\3\4\4\d\n\y\t\2\p\n\w\k\q\w\y\s\g\y\6\5\h\7\7\8\v\9\6\z\8\a\z\z\0\d\u\2\6\5\v\j\r\b\z\s\p\o\2\j\j\t\q\2\e\c\r\c\n\e\c\1\1\e\i\e\k\9\6\5\3\h\b\j\m\5\x\i\e\6\a\b\q\9\n\q\n\s\p\z\x\l\o\n\d\n\9\r\u\6\r\v\i\m\r\o\f\1\2\r\5\b\9\z\c\a\u\1\6\s\2\6\0\o\h\w\q\j\2\m\v\1\h\5\n\8\q\u\2\r\4\g\f\2\b\t\g\g\r\p\y\b\d\s\f\h\k\j\x\r\q\e\6\b\v\9\9\3\s\h\p\x\1\x\v\k\b\u\4\6\z\9\y\6\d\7\g\w\j\e\c\l\z\i\4\o\f\0\4\a\q\h\6\2\1\c\z\a\q\p\t\1\z\o\6\p\k\8\7\n\j\t\c\p\0\c\6\d\l\6\4\u\s\u\2\q\u\h\2\r\z\s\n\1\k\j\4\b\5\b\d\t\d\q\7\r\l\p\g\3\c\7\s\q\m\o\e\g\6\3\c\v\9\0\h\u\j\y\v\b\p\p\z\y\m\3\4\6\g\9\6\e\c\6\y\o\m\l\h\p ]] 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:00.892 13:32:13 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:14:00.892 [2024-05-15 13:32:13.859789] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:00.892 [2024-05-15 13:32:13.859882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76257 ] 00:14:00.892 [2024-05-15 13:32:13.980759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:01.154 [2024-05-15 13:32:14.000043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.154 [2024-05-15 13:32:14.065391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.412  Copying: 512/512 [B] (average 500 kBps) 00:14:01.412 00:14:01.412 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fwj7w31w2xa3fqy0zpek5c76d44jelqt7kuamzfzgp5cnlbx9wszuz936t25pwg950139nxmyfl5cyr1qlr7a8tcpt8veo2jvn4l0h36itd3sdy33hlbruwtcry1sd8eyibwyrlz3h1dy90ya61yggcvwyal4q1vvjh6ryvqe9xkslbi1yowp8tlomgg2tsua5qq2oj7efuw3mv4pzk5ceewnai0s18o3i4s8qegijfb90tsjmqt18laqh2ld02ij21g9ngm1ydtv0y7jsvvn3fngr64j42j1085dct7lxol1408ig486w4sljpps3hqjvclx2k2h2qc8q58t5ibqdvi6a0beywgbyqnj0hropjj8gyoz01kmrj81tpf20ocqiav05hgcrbn7nf7k68057mx46qiikcuszq39akhxz3ji81awp3jtrg084r8pfzgktjugdkxf7lar7ufptfmz484qfwjl2zpj39gc60wontplk0q87twvyeu3rzwsjm6 == \f\w\j\7\w\3\1\w\2\x\a\3\f\q\y\0\z\p\e\k\5\c\7\6\d\4\4\j\e\l\q\t\7\k\u\a\m\z\f\z\g\p\5\c\n\l\b\x\9\w\s\z\u\z\9\3\6\t\2\5\p\w\g\9\5\0\1\3\9\n\x\m\y\f\l\5\c\y\r\1\q\l\r\7\a\8\t\c\p\t\8\v\e\o\2\j\v\n\4\l\0\h\3\6\i\t\d\3\s\d\y\3\3\h\l\b\r\u\w\t\c\r\y\1\s\d\8\e\y\i\b\w\y\r\l\z\3\h\1\d\y\9\0\y\a\6\1\y\g\g\c\v\w\y\a\l\4\q\1\v\v\j\h\6\r\y\v\q\e\9\x\k\s\l\b\i\1\y\o\w\p\8\t\l\o\m\g\g\2\t\s\u\a\5\q\q\2\o\j\7\e\f\u\w\3\m\v\4\p\z\k\5\c\e\e\w\n\a\i\0\s\1\8\o\3\i\4\s\8\q\e\g\i\j\f\b\9\0\t\s\j\m\q\t\1\8\l\a\q\h\2\l\d\0\2\i\j\2\1\g\9\n\g\m\1\y\d\t\v\0\y\7\j\s\v\v\n\3\f\n\g\r\6\4\j\4\2\j\1\0\8\5\d\c\t\7\l\x\o\l\1\4\0\8\i\g\4\8\6\w\4\s\l\j\p\p\s\3\h\q\j\v\c\l\x\2\k\2\h\2\q\c\8\q\5\8\t\5\i\b\q\d\v\i\6\a\0\b\e\y\w\g\b\y\q\n\j\0\h\r\o\p\j\j\8\g\y\o\z\0\1\k\m\r\j\8\1\t\p\f\2\0\o\c\q\i\a\v\0\5\h\g\c\r\b\n\7\n\f\7\k\6\8\0\5\7\m\x\4\6\q\i\i\k\c\u\s\z\q\3\9\a\k\h\x\z\3\j\i\8\1\a\w\p\3\j\t\r\g\0\8\4\r\8\p\f\z\g\k\t\j\u\g\d\k\x\f\7\l\a\r\7\u\f\p\t\f\m\z\4\8\4\q\f\w\j\l\2\z\p\j\3\9\g\c\6\0\w\o\n\t\p\l\k\0\q\8\7\t\w\v\y\e\u\3\r\z\w\s\j\m\6 ]] 00:14:01.412 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:01.412 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:14:01.412 [2024-05-15 13:32:14.372925] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:01.412 [2024-05-15 13:32:14.373045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76261 ] 00:14:01.412 [2024-05-15 13:32:14.497198] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:01.412 [2024-05-15 13:32:14.510725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.671 [2024-05-15 13:32:14.562982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.929  Copying: 512/512 [B] (average 500 kBps) 00:14:01.929 00:14:01.929 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fwj7w31w2xa3fqy0zpek5c76d44jelqt7kuamzfzgp5cnlbx9wszuz936t25pwg950139nxmyfl5cyr1qlr7a8tcpt8veo2jvn4l0h36itd3sdy33hlbruwtcry1sd8eyibwyrlz3h1dy90ya61yggcvwyal4q1vvjh6ryvqe9xkslbi1yowp8tlomgg2tsua5qq2oj7efuw3mv4pzk5ceewnai0s18o3i4s8qegijfb90tsjmqt18laqh2ld02ij21g9ngm1ydtv0y7jsvvn3fngr64j42j1085dct7lxol1408ig486w4sljpps3hqjvclx2k2h2qc8q58t5ibqdvi6a0beywgbyqnj0hropjj8gyoz01kmrj81tpf20ocqiav05hgcrbn7nf7k68057mx46qiikcuszq39akhxz3ji81awp3jtrg084r8pfzgktjugdkxf7lar7ufptfmz484qfwjl2zpj39gc60wontplk0q87twvyeu3rzwsjm6 == \f\w\j\7\w\3\1\w\2\x\a\3\f\q\y\0\z\p\e\k\5\c\7\6\d\4\4\j\e\l\q\t\7\k\u\a\m\z\f\z\g\p\5\c\n\l\b\x\9\w\s\z\u\z\9\3\6\t\2\5\p\w\g\9\5\0\1\3\9\n\x\m\y\f\l\5\c\y\r\1\q\l\r\7\a\8\t\c\p\t\8\v\e\o\2\j\v\n\4\l\0\h\3\6\i\t\d\3\s\d\y\3\3\h\l\b\r\u\w\t\c\r\y\1\s\d\8\e\y\i\b\w\y\r\l\z\3\h\1\d\y\9\0\y\a\6\1\y\g\g\c\v\w\y\a\l\4\q\1\v\v\j\h\6\r\y\v\q\e\9\x\k\s\l\b\i\1\y\o\w\p\8\t\l\o\m\g\g\2\t\s\u\a\5\q\q\2\o\j\7\e\f\u\w\3\m\v\4\p\z\k\5\c\e\e\w\n\a\i\0\s\1\8\o\3\i\4\s\8\q\e\g\i\j\f\b\9\0\t\s\j\m\q\t\1\8\l\a\q\h\2\l\d\0\2\i\j\2\1\g\9\n\g\m\1\y\d\t\v\0\y\7\j\s\v\v\n\3\f\n\g\r\6\4\j\4\2\j\1\0\8\5\d\c\t\7\l\x\o\l\1\4\0\8\i\g\4\8\6\w\4\s\l\j\p\p\s\3\h\q\j\v\c\l\x\2\k\2\h\2\q\c\8\q\5\8\t\5\i\b\q\d\v\i\6\a\0\b\e\y\w\g\b\y\q\n\j\0\h\r\o\p\j\j\8\g\y\o\z\0\1\k\m\r\j\8\1\t\p\f\2\0\o\c\q\i\a\v\0\5\h\g\c\r\b\n\7\n\f\7\k\6\8\0\5\7\m\x\4\6\q\i\i\k\c\u\s\z\q\3\9\a\k\h\x\z\3\j\i\8\1\a\w\p\3\j\t\r\g\0\8\4\r\8\p\f\z\g\k\t\j\u\g\d\k\x\f\7\l\a\r\7\u\f\p\t\f\m\z\4\8\4\q\f\w\j\l\2\z\p\j\3\9\g\c\6\0\w\o\n\t\p\l\k\0\q\8\7\t\w\v\y\e\u\3\r\z\w\s\j\m\6 ]] 00:14:01.929 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:01.929 13:32:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:14:01.929 [2024-05-15 13:32:14.856251] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:01.929 [2024-05-15 13:32:14.856835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76276 ] 00:14:01.929 [2024-05-15 13:32:14.983585] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:01.929 [2024-05-15 13:32:15.005716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.186 [2024-05-15 13:32:15.063765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.445  Copying: 512/512 [B] (average 500 kBps) 00:14:02.445 00:14:02.445 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fwj7w31w2xa3fqy0zpek5c76d44jelqt7kuamzfzgp5cnlbx9wszuz936t25pwg950139nxmyfl5cyr1qlr7a8tcpt8veo2jvn4l0h36itd3sdy33hlbruwtcry1sd8eyibwyrlz3h1dy90ya61yggcvwyal4q1vvjh6ryvqe9xkslbi1yowp8tlomgg2tsua5qq2oj7efuw3mv4pzk5ceewnai0s18o3i4s8qegijfb90tsjmqt18laqh2ld02ij21g9ngm1ydtv0y7jsvvn3fngr64j42j1085dct7lxol1408ig486w4sljpps3hqjvclx2k2h2qc8q58t5ibqdvi6a0beywgbyqnj0hropjj8gyoz01kmrj81tpf20ocqiav05hgcrbn7nf7k68057mx46qiikcuszq39akhxz3ji81awp3jtrg084r8pfzgktjugdkxf7lar7ufptfmz484qfwjl2zpj39gc60wontplk0q87twvyeu3rzwsjm6 == \f\w\j\7\w\3\1\w\2\x\a\3\f\q\y\0\z\p\e\k\5\c\7\6\d\4\4\j\e\l\q\t\7\k\u\a\m\z\f\z\g\p\5\c\n\l\b\x\9\w\s\z\u\z\9\3\6\t\2\5\p\w\g\9\5\0\1\3\9\n\x\m\y\f\l\5\c\y\r\1\q\l\r\7\a\8\t\c\p\t\8\v\e\o\2\j\v\n\4\l\0\h\3\6\i\t\d\3\s\d\y\3\3\h\l\b\r\u\w\t\c\r\y\1\s\d\8\e\y\i\b\w\y\r\l\z\3\h\1\d\y\9\0\y\a\6\1\y\g\g\c\v\w\y\a\l\4\q\1\v\v\j\h\6\r\y\v\q\e\9\x\k\s\l\b\i\1\y\o\w\p\8\t\l\o\m\g\g\2\t\s\u\a\5\q\q\2\o\j\7\e\f\u\w\3\m\v\4\p\z\k\5\c\e\e\w\n\a\i\0\s\1\8\o\3\i\4\s\8\q\e\g\i\j\f\b\9\0\t\s\j\m\q\t\1\8\l\a\q\h\2\l\d\0\2\i\j\2\1\g\9\n\g\m\1\y\d\t\v\0\y\7\j\s\v\v\n\3\f\n\g\r\6\4\j\4\2\j\1\0\8\5\d\c\t\7\l\x\o\l\1\4\0\8\i\g\4\8\6\w\4\s\l\j\p\p\s\3\h\q\j\v\c\l\x\2\k\2\h\2\q\c\8\q\5\8\t\5\i\b\q\d\v\i\6\a\0\b\e\y\w\g\b\y\q\n\j\0\h\r\o\p\j\j\8\g\y\o\z\0\1\k\m\r\j\8\1\t\p\f\2\0\o\c\q\i\a\v\0\5\h\g\c\r\b\n\7\n\f\7\k\6\8\0\5\7\m\x\4\6\q\i\i\k\c\u\s\z\q\3\9\a\k\h\x\z\3\j\i\8\1\a\w\p\3\j\t\r\g\0\8\4\r\8\p\f\z\g\k\t\j\u\g\d\k\x\f\7\l\a\r\7\u\f\p\t\f\m\z\4\8\4\q\f\w\j\l\2\z\p\j\3\9\g\c\6\0\w\o\n\t\p\l\k\0\q\8\7\t\w\v\y\e\u\3\r\z\w\s\j\m\6 ]] 00:14:02.445 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:02.445 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:14:02.445 [2024-05-15 13:32:15.389271] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:02.445 [2024-05-15 13:32:15.389893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76280 ] 00:14:02.445 [2024-05-15 13:32:15.516319] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:02.445 [2024-05-15 13:32:15.535510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.703 [2024-05-15 13:32:15.588430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.961  Copying: 512/512 [B] (average 500 kBps) 00:14:02.961 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fwj7w31w2xa3fqy0zpek5c76d44jelqt7kuamzfzgp5cnlbx9wszuz936t25pwg950139nxmyfl5cyr1qlr7a8tcpt8veo2jvn4l0h36itd3sdy33hlbruwtcry1sd8eyibwyrlz3h1dy90ya61yggcvwyal4q1vvjh6ryvqe9xkslbi1yowp8tlomgg2tsua5qq2oj7efuw3mv4pzk5ceewnai0s18o3i4s8qegijfb90tsjmqt18laqh2ld02ij21g9ngm1ydtv0y7jsvvn3fngr64j42j1085dct7lxol1408ig486w4sljpps3hqjvclx2k2h2qc8q58t5ibqdvi6a0beywgbyqnj0hropjj8gyoz01kmrj81tpf20ocqiav05hgcrbn7nf7k68057mx46qiikcuszq39akhxz3ji81awp3jtrg084r8pfzgktjugdkxf7lar7ufptfmz484qfwjl2zpj39gc60wontplk0q87twvyeu3rzwsjm6 == \f\w\j\7\w\3\1\w\2\x\a\3\f\q\y\0\z\p\e\k\5\c\7\6\d\4\4\j\e\l\q\t\7\k\u\a\m\z\f\z\g\p\5\c\n\l\b\x\9\w\s\z\u\z\9\3\6\t\2\5\p\w\g\9\5\0\1\3\9\n\x\m\y\f\l\5\c\y\r\1\q\l\r\7\a\8\t\c\p\t\8\v\e\o\2\j\v\n\4\l\0\h\3\6\i\t\d\3\s\d\y\3\3\h\l\b\r\u\w\t\c\r\y\1\s\d\8\e\y\i\b\w\y\r\l\z\3\h\1\d\y\9\0\y\a\6\1\y\g\g\c\v\w\y\a\l\4\q\1\v\v\j\h\6\r\y\v\q\e\9\x\k\s\l\b\i\1\y\o\w\p\8\t\l\o\m\g\g\2\t\s\u\a\5\q\q\2\o\j\7\e\f\u\w\3\m\v\4\p\z\k\5\c\e\e\w\n\a\i\0\s\1\8\o\3\i\4\s\8\q\e\g\i\j\f\b\9\0\t\s\j\m\q\t\1\8\l\a\q\h\2\l\d\0\2\i\j\2\1\g\9\n\g\m\1\y\d\t\v\0\y\7\j\s\v\v\n\3\f\n\g\r\6\4\j\4\2\j\1\0\8\5\d\c\t\7\l\x\o\l\1\4\0\8\i\g\4\8\6\w\4\s\l\j\p\p\s\3\h\q\j\v\c\l\x\2\k\2\h\2\q\c\8\q\5\8\t\5\i\b\q\d\v\i\6\a\0\b\e\y\w\g\b\y\q\n\j\0\h\r\o\p\j\j\8\g\y\o\z\0\1\k\m\r\j\8\1\t\p\f\2\0\o\c\q\i\a\v\0\5\h\g\c\r\b\n\7\n\f\7\k\6\8\0\5\7\m\x\4\6\q\i\i\k\c\u\s\z\q\3\9\a\k\h\x\z\3\j\i\8\1\a\w\p\3\j\t\r\g\0\8\4\r\8\p\f\z\g\k\t\j\u\g\d\k\x\f\7\l\a\r\7\u\f\p\t\f\m\z\4\8\4\q\f\w\j\l\2\z\p\j\3\9\g\c\6\0\w\o\n\t\p\l\k\0\q\8\7\t\w\v\y\e\u\3\r\z\w\s\j\m\6 ]] 00:14:02.961 00:14:02.961 real 0m4.006s 00:14:02.961 user 0m2.064s 00:14:02.961 sys 0m1.951s 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:14:02.961 ************************************ 00:14:02.961 END TEST dd_flags_misc 00:14:02.961 ************************************ 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:14:02.961 * Second test run, disabling liburing, forcing AIO 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:02.961 ************************************ 00:14:02.961 START TEST dd_flag_append_forced_aio 00:14:02.961 ************************************ 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio 
-- dd/posix.sh@17 -- # local dump1 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=nzyn4ahz1rzrgqjw8k8g41gbllrofe0u 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=03kkuurc6fet7q3gtzwfqm9cm1n840an 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s nzyn4ahz1rzrgqjw8k8g41gbllrofe0u 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 03kkuurc6fet7q3gtzwfqm9cm1n840an 00:14:02.961 13:32:15 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:14:02.961 [2024-05-15 13:32:15.944959] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:02.961 [2024-05-15 13:32:15.945050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76314 ] 00:14:03.219 [2024-05-15 13:32:16.066751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
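From this point the suite repeats with --aio, disabling liburing as announced above. dd_flag_append_forced_aio writes two fresh 32-byte strings (dump0 and dump1 in the trace) into dd.dump0 and dd.dump1, copies dump0 onto dump1 with --oflag=append, and then checks that dd.dump1 holds dump1 immediately followed by dump0 (the [[ ... ]] comparison below). The same contract expressed with GNU dd, where conv=notrunc is needed to keep the existing bytes:

    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc   # append dump0 after dd.dump1's existing contents
    [ "$(< dd.dump1)" = "${dump1}${dump0}" ]               # output is the original dump1 followed by dump0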
00:14:03.219 [2024-05-15 13:32:16.083501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.219 [2024-05-15 13:32:16.148023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.476  Copying: 32/32 [B] (average 31 kBps) 00:14:03.476 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 03kkuurc6fet7q3gtzwfqm9cm1n840annzyn4ahz1rzrgqjw8k8g41gbllrofe0u == \0\3\k\k\u\u\r\c\6\f\e\t\7\q\3\g\t\z\w\f\q\m\9\c\m\1\n\8\4\0\a\n\n\z\y\n\4\a\h\z\1\r\z\r\g\q\j\w\8\k\8\g\4\1\g\b\l\l\r\o\f\e\0\u ]] 00:14:03.476 00:14:03.476 real 0m0.547s 00:14:03.476 user 0m0.278s 00:14:03.476 sys 0m0.148s 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:03.476 ************************************ 00:14:03.476 END TEST dd_flag_append_forced_aio 00:14:03.476 ************************************ 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:03.476 ************************************ 00:14:03.476 START TEST dd_flag_directory_forced_aio 00:14:03.476 ************************************ 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.476 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.477 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.477 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.477 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.477 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.477 13:32:16 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:03.477 13:32:16 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:03.477 [2024-05-15 13:32:16.537258] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:03.477 [2024-05-15 13:32:16.537362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76341 ] 00:14:03.734 [2024-05-15 13:32:16.664759] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:03.734 [2024-05-15 13:32:16.679162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.734 [2024-05-15 13:32:16.755625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.734 [2024-05-15 13:32:16.827150] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:03.734 [2024-05-15 13:32:16.827222] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:03.734 [2024-05-15 13:32:16.827249] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:03.992 [2024-05-15 13:32:16.921306] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.992 13:32:17 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:03.992 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:14:03.992 [2024-05-15 13:32:17.052908] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:03.992 [2024-05-15 13:32:17.053011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76350 ] 00:14:04.250 [2024-05-15 13:32:17.174492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
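dd_flag_directory_forced_aio is the --aio rerun of the directory check from earlier in the log: --iflag=directory (above) and --oflag=directory (below) point spdk_dd at a regular dump file, and both runs must fail with ENOTDIR ("Not a directory") for the NOT wrapper to pass. GNU dd exposes a flag of the same name, so a rough equivalent is:

    dd if=dd.dump0 iflag=directory of=dd.dump0    # fails: dd.dump0 is a regular file, not a directory
    dd if=dd.dump0 of=dd.dump0 oflag=directory    # same failure on the output side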
00:14:04.250 [2024-05-15 13:32:17.192208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.250 [2024-05-15 13:32:17.275257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.250 [2024-05-15 13:32:17.344784] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:04.250 [2024-05-15 13:32:17.344847] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:14:04.250 [2024-05-15 13:32:17.344863] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:04.509 [2024-05-15 13:32:17.438867] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.509 00:14:04.509 real 0m1.042s 00:14:04.509 user 0m0.534s 00:14:04.509 sys 0m0.296s 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:04.509 ************************************ 00:14:04.509 END TEST dd_flag_directory_forced_aio 00:14:04.509 ************************************ 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:04.509 ************************************ 00:14:04.509 START TEST dd_flag_nofollow_forced_aio 00:14:04.509 ************************************ 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:04.509 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:04.510 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:04.510 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:04.510 13:32:17 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:04.767 [2024-05-15 13:32:17.635759] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:04.767 [2024-05-15 13:32:17.635847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76379 ] 00:14:04.767 [2024-05-15 13:32:17.761289] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
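The directory-flag case that finishes just above shows the harness's expected-failure pattern under xtrace: spdk_dd exits non-zero when asked to open a regular file with --oflag=directory ("Not a directory"), the wrapper trims statuses above 128 (236 - 128 = 108 here, and 216 - 128 = 88 in the nofollow runs that follow), and any remaining non-zero status is collapsed to es=1 so the inverted assertion passes. The real helper is NOT in common/autotest_common.sh, whose source is not part of this log; the sketch below only mirrors the behaviour visible in the trace and uses a made-up name to avoid confusion with the real helper.

expect_failure() {
  # Illustrative stand-in for the NOT helper; reproduces only the es handling seen in the trace.
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=$(( es - 128 ))   # e.g. 236 -> 108, 216 -> 88
  (( es != 0 )) && es=1                  # collapse any expected failure to es=1
  (( !es == 0 ))                         # succeed only when the wrapped command failed
}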
00:14:04.767 [2024-05-15 13:32:17.781461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.026 [2024-05-15 13:32:17.874055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.026 [2024-05-15 13:32:18.009751] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:14:05.026 [2024-05-15 13:32:18.009836] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:14:05.026 [2024-05-15 13:32:18.009859] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:05.284 [2024-05-15 13:32:18.188888] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:05.284 13:32:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:14:05.284 [2024-05-15 13:32:18.380425] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:05.285 [2024-05-15 13:32:18.380556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76388 ] 00:14:05.542 [2024-05-15 13:32:18.502889] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:05.542 [2024-05-15 13:32:18.521422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.542 [2024-05-15 13:32:18.608373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.801 [2024-05-15 13:32:18.732378] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:14:05.801 [2024-05-15 13:32:18.732444] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:14:05.801 [2024-05-15 13:32:18.732461] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:06.131 [2024-05-15 13:32:18.909241] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:06.131 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:06.131 [2024-05-15 13:32:19.091505] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:06.131 [2024-05-15 13:32:19.091599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76401 ] 00:14:06.389 [2024-05-15 13:32:19.213175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
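The nofollow case builds dd.dump0.link and dd.dump1.link with ln -fs and asserts that spdk_dd refuses to open a symlink when nofollow is requested on the corresponding side: both runs above fail with "Too many levels of symbolic links", the ELOOP error that opening a symlink with nofollow produces, while the copy through dd.dump0.link without the flag (just below) completes normally. A minimal reproduction outside the harness, assuming spdk_dd is on PATH (the log uses the full build/bin path) and dd.dump0 already holds the generated test bytes:

cd /home/vagrant/spdk_repo/spdk/test/dd
ln -fs dd.dump0 dd.dump0.link
# Expected to fail with ELOOP because the input is opened with nofollow:
spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 || echo "rejected as expected"
# Without the flag the link is followed and the copy succeeds:
spdk_dd --aio --if=dd.dump0.link --of=dd.dump1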
00:14:06.390 [2024-05-15 13:32:19.230142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.390 [2024-05-15 13:32:19.316416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.649  Copying: 512/512 [B] (average 500 kBps) 00:14:06.649 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 2hhtg53wp1u7w7q7hun3o57ygyzrwljwpaps4cig52o3hn6xfnvo5oq9iql3mff6h9c4ul994m99t85bn9smkibp2s3fg4ul2fcprikzhm50bqs3kk74oso90pjwarlo4p30870aibjpicm6u4ptmlca9am4o19b4sdk2ycw7s0gg73dg7cz98r4xt3ofegcjjfgx3ybn0x8znhyeb4sfpgf5xacvg4jm9u6ta3bs0o24zwe5pt8q6gnvmez9o9xok1fi8grct62rgm89sg0en8bj9irhunrnwj15uowfd346s267ofj67k8ukzntnztbtp9zob6rzcal9dfh4bmjohc45ttpnozvkkkp8h2gq1662vf0nvbphozqoy7xfbcqzkrz54hatyd1qqap3v57dplsjo30xvgnx9kcczhlg0dx69bq6zauibwjicm4pxy13pazl2avcb0mjvrz4ajkxml95y5ucr3g19gmnvx5qc12vsz6nv12gyiwgahejvz == \2\h\h\t\g\5\3\w\p\1\u\7\w\7\q\7\h\u\n\3\o\5\7\y\g\y\z\r\w\l\j\w\p\a\p\s\4\c\i\g\5\2\o\3\h\n\6\x\f\n\v\o\5\o\q\9\i\q\l\3\m\f\f\6\h\9\c\4\u\l\9\9\4\m\9\9\t\8\5\b\n\9\s\m\k\i\b\p\2\s\3\f\g\4\u\l\2\f\c\p\r\i\k\z\h\m\5\0\b\q\s\3\k\k\7\4\o\s\o\9\0\p\j\w\a\r\l\o\4\p\3\0\8\7\0\a\i\b\j\p\i\c\m\6\u\4\p\t\m\l\c\a\9\a\m\4\o\1\9\b\4\s\d\k\2\y\c\w\7\s\0\g\g\7\3\d\g\7\c\z\9\8\r\4\x\t\3\o\f\e\g\c\j\j\f\g\x\3\y\b\n\0\x\8\z\n\h\y\e\b\4\s\f\p\g\f\5\x\a\c\v\g\4\j\m\9\u\6\t\a\3\b\s\0\o\2\4\z\w\e\5\p\t\8\q\6\g\n\v\m\e\z\9\o\9\x\o\k\1\f\i\8\g\r\c\t\6\2\r\g\m\8\9\s\g\0\e\n\8\b\j\9\i\r\h\u\n\r\n\w\j\1\5\u\o\w\f\d\3\4\6\s\2\6\7\o\f\j\6\7\k\8\u\k\z\n\t\n\z\t\b\t\p\9\z\o\b\6\r\z\c\a\l\9\d\f\h\4\b\m\j\o\h\c\4\5\t\t\p\n\o\z\v\k\k\k\p\8\h\2\g\q\1\6\6\2\v\f\0\n\v\b\p\h\o\z\q\o\y\7\x\f\b\c\q\z\k\r\z\5\4\h\a\t\y\d\1\q\q\a\p\3\v\5\7\d\p\l\s\j\o\3\0\x\v\g\n\x\9\k\c\c\z\h\l\g\0\d\x\6\9\b\q\6\z\a\u\i\b\w\j\i\c\m\4\p\x\y\1\3\p\a\z\l\2\a\v\c\b\0\m\j\v\r\z\4\a\j\k\x\m\l\9\5\y\5\u\c\r\3\g\1\9\g\m\n\v\x\5\q\c\1\2\v\s\z\6\n\v\1\2\g\y\i\w\g\a\h\e\j\v\z ]] 00:14:06.649 00:14:06.649 real 0m2.136s 00:14:06.649 user 0m1.149s 00:14:06.649 sys 0m0.648s 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:06.649 ************************************ 00:14:06.649 END TEST dd_flag_nofollow_forced_aio 00:14:06.649 ************************************ 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:06.649 13:32:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:06.906 ************************************ 00:14:06.906 START TEST dd_flag_noatime_forced_aio 00:14:06.906 ************************************ 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:06.906 13:32:19 
spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715779939 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715779939 00:14:06.906 13:32:19 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:14:07.841 13:32:20 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:07.841 [2024-05-15 13:32:20.833775] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:07.841 [2024-05-15 13:32:20.833927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76442 ] 00:14:08.099 [2024-05-15 13:32:20.967678] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:08.100 [2024-05-15 13:32:20.985484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.100 [2024-05-15 13:32:21.054096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.421  Copying: 512/512 [B] (average 500 kBps) 00:14:08.421 00:14:08.421 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:08.421 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715779939 )) 00:14:08.421 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:08.421 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715779939 )) 00:14:08.421 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:08.421 [2024-05-15 13:32:21.422630] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:08.421 [2024-05-15 13:32:21.423230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76453 ] 00:14:08.680 [2024-05-15 13:32:21.553419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:08.680 [2024-05-15 13:32:21.573746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.680 [2024-05-15 13:32:21.661487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.938  Copying: 512/512 [B] (average 500 kBps) 00:14:08.938 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715779941 )) 00:14:08.938 00:14:08.938 real 0m2.201s 00:14:08.938 user 0m0.598s 00:14:08.938 sys 0m0.350s 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.938 ************************************ 00:14:08.938 END TEST dd_flag_noatime_forced_aio 00:14:08.938 ************************************ 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.938 13:32:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:08.938 ************************************ 00:14:08.938 START TEST dd_flags_misc_forced_aio 00:14:08.938 ************************************ 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:08.938 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:14:09.196 [2024-05-15 13:32:22.074972] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:09.196 [2024-05-15 13:32:22.075370] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76485 ] 00:14:09.196 [2024-05-15 13:32:22.202033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
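The noatime case that just completed records the access time of dd.dump0 with stat --printf=%X, copies it once with --iflag=noatime and checks the atime is unchanged ((atime_if == 1715779939)), then copies it again without the flag and checks the atime has moved forward ((atime_if < 1715779941)). A condensed sketch of the same check, assuming spdk_dd is on PATH and the filesystem actually updates atime on read (strict noatime/relatime mounts would mask the second assertion):

atime_before=$(stat --printf=%X dd.dump0)
spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
[ "$(stat --printf=%X dd.dump0)" -eq "$atime_before" ] && echo "atime preserved"
sleep 1
spdk_dd --aio --if=dd.dump0 --of=dd.dump1
[ "$(stat --printf=%X dd.dump0)" -gt "$atime_before" ] && echo "atime updated"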
00:14:09.196 [2024-05-15 13:32:22.216401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.454 [2024-05-15 13:32:22.298548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.712  Copying: 512/512 [B] (average 500 kBps) 00:14:09.712 00:14:09.712 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z9xvr71a9gq1ct6mr0c9rp498l14uct2t39z0g34dghqdyiwrvh845sz5csa5tpizxzvqbyhoyl44jeygdfvsdne5hv51bib5xz702wdcpm82li8uv5uxw8hwii1eh51bcsnfvuc9jx9nw8utz2n2b0v9zs10us87keivffzy1xky68pyfnacr3cfa6iwizuoy9d3e1xguugt8u12ng71pknwyj2xwjymp6gt6em8tpcw0m8l4x1zohutwzrklrvnjj5obqtlccv1gytgu1mv0ufzlmwbom24wib2gqyre2b0dqjzzdk4k38r3d9vxz5zot1b9tx4wiyh8h3usoofbx9g7mxs6y07uo7ds5p51k4fzk5nnnjoitgrurj1pxqgazpzc75o1vr650i5hsvnls9dnyhuafphp77rwgof7xey49royunvi0w3jol3nb1hssoyiusc03m8od1ic3lp4mrjy3xbji883g4lagp8vfdxvbfwut9l2qsdm13dtz5 == \z\9\x\v\r\7\1\a\9\g\q\1\c\t\6\m\r\0\c\9\r\p\4\9\8\l\1\4\u\c\t\2\t\3\9\z\0\g\3\4\d\g\h\q\d\y\i\w\r\v\h\8\4\5\s\z\5\c\s\a\5\t\p\i\z\x\z\v\q\b\y\h\o\y\l\4\4\j\e\y\g\d\f\v\s\d\n\e\5\h\v\5\1\b\i\b\5\x\z\7\0\2\w\d\c\p\m\8\2\l\i\8\u\v\5\u\x\w\8\h\w\i\i\1\e\h\5\1\b\c\s\n\f\v\u\c\9\j\x\9\n\w\8\u\t\z\2\n\2\b\0\v\9\z\s\1\0\u\s\8\7\k\e\i\v\f\f\z\y\1\x\k\y\6\8\p\y\f\n\a\c\r\3\c\f\a\6\i\w\i\z\u\o\y\9\d\3\e\1\x\g\u\u\g\t\8\u\1\2\n\g\7\1\p\k\n\w\y\j\2\x\w\j\y\m\p\6\g\t\6\e\m\8\t\p\c\w\0\m\8\l\4\x\1\z\o\h\u\t\w\z\r\k\l\r\v\n\j\j\5\o\b\q\t\l\c\c\v\1\g\y\t\g\u\1\m\v\0\u\f\z\l\m\w\b\o\m\2\4\w\i\b\2\g\q\y\r\e\2\b\0\d\q\j\z\z\d\k\4\k\3\8\r\3\d\9\v\x\z\5\z\o\t\1\b\9\t\x\4\w\i\y\h\8\h\3\u\s\o\o\f\b\x\9\g\7\m\x\s\6\y\0\7\u\o\7\d\s\5\p\5\1\k\4\f\z\k\5\n\n\n\j\o\i\t\g\r\u\r\j\1\p\x\q\g\a\z\p\z\c\7\5\o\1\v\r\6\5\0\i\5\h\s\v\n\l\s\9\d\n\y\h\u\a\f\p\h\p\7\7\r\w\g\o\f\7\x\e\y\4\9\r\o\y\u\n\v\i\0\w\3\j\o\l\3\n\b\1\h\s\s\o\y\i\u\s\c\0\3\m\8\o\d\1\i\c\3\l\p\4\m\r\j\y\3\x\b\j\i\8\8\3\g\4\l\a\g\p\8\v\f\d\x\v\b\f\w\u\t\9\l\2\q\s\d\m\1\3\d\t\z\5 ]] 00:14:09.712 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:09.712 13:32:22 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:14:09.712 [2024-05-15 13:32:22.622571] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:09.712 [2024-05-15 13:32:22.622996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76493 ] 00:14:09.712 [2024-05-15 13:32:22.755386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:09.712 [2024-05-15 13:32:22.770128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.969 [2024-05-15 13:32:22.847917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.240  Copying: 512/512 [B] (average 500 kBps) 00:14:10.240 00:14:10.240 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z9xvr71a9gq1ct6mr0c9rp498l14uct2t39z0g34dghqdyiwrvh845sz5csa5tpizxzvqbyhoyl44jeygdfvsdne5hv51bib5xz702wdcpm82li8uv5uxw8hwii1eh51bcsnfvuc9jx9nw8utz2n2b0v9zs10us87keivffzy1xky68pyfnacr3cfa6iwizuoy9d3e1xguugt8u12ng71pknwyj2xwjymp6gt6em8tpcw0m8l4x1zohutwzrklrvnjj5obqtlccv1gytgu1mv0ufzlmwbom24wib2gqyre2b0dqjzzdk4k38r3d9vxz5zot1b9tx4wiyh8h3usoofbx9g7mxs6y07uo7ds5p51k4fzk5nnnjoitgrurj1pxqgazpzc75o1vr650i5hsvnls9dnyhuafphp77rwgof7xey49royunvi0w3jol3nb1hssoyiusc03m8od1ic3lp4mrjy3xbji883g4lagp8vfdxvbfwut9l2qsdm13dtz5 == \z\9\x\v\r\7\1\a\9\g\q\1\c\t\6\m\r\0\c\9\r\p\4\9\8\l\1\4\u\c\t\2\t\3\9\z\0\g\3\4\d\g\h\q\d\y\i\w\r\v\h\8\4\5\s\z\5\c\s\a\5\t\p\i\z\x\z\v\q\b\y\h\o\y\l\4\4\j\e\y\g\d\f\v\s\d\n\e\5\h\v\5\1\b\i\b\5\x\z\7\0\2\w\d\c\p\m\8\2\l\i\8\u\v\5\u\x\w\8\h\w\i\i\1\e\h\5\1\b\c\s\n\f\v\u\c\9\j\x\9\n\w\8\u\t\z\2\n\2\b\0\v\9\z\s\1\0\u\s\8\7\k\e\i\v\f\f\z\y\1\x\k\y\6\8\p\y\f\n\a\c\r\3\c\f\a\6\i\w\i\z\u\o\y\9\d\3\e\1\x\g\u\u\g\t\8\u\1\2\n\g\7\1\p\k\n\w\y\j\2\x\w\j\y\m\p\6\g\t\6\e\m\8\t\p\c\w\0\m\8\l\4\x\1\z\o\h\u\t\w\z\r\k\l\r\v\n\j\j\5\o\b\q\t\l\c\c\v\1\g\y\t\g\u\1\m\v\0\u\f\z\l\m\w\b\o\m\2\4\w\i\b\2\g\q\y\r\e\2\b\0\d\q\j\z\z\d\k\4\k\3\8\r\3\d\9\v\x\z\5\z\o\t\1\b\9\t\x\4\w\i\y\h\8\h\3\u\s\o\o\f\b\x\9\g\7\m\x\s\6\y\0\7\u\o\7\d\s\5\p\5\1\k\4\f\z\k\5\n\n\n\j\o\i\t\g\r\u\r\j\1\p\x\q\g\a\z\p\z\c\7\5\o\1\v\r\6\5\0\i\5\h\s\v\n\l\s\9\d\n\y\h\u\a\f\p\h\p\7\7\r\w\g\o\f\7\x\e\y\4\9\r\o\y\u\n\v\i\0\w\3\j\o\l\3\n\b\1\h\s\s\o\y\i\u\s\c\0\3\m\8\o\d\1\i\c\3\l\p\4\m\r\j\y\3\x\b\j\i\8\8\3\g\4\l\a\g\p\8\v\f\d\x\v\b\f\w\u\t\9\l\2\q\s\d\m\1\3\d\t\z\5 ]] 00:14:10.240 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:10.240 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:14:10.240 [2024-05-15 13:32:23.188583] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:10.240 [2024-05-15 13:32:23.188973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76500 ] 00:14:10.240 [2024-05-15 13:32:23.313468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:10.507 [2024-05-15 13:32:23.329731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.507 [2024-05-15 13:32:23.382762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.768  Copying: 512/512 [B] (average 125 kBps) 00:14:10.768 00:14:10.769 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z9xvr71a9gq1ct6mr0c9rp498l14uct2t39z0g34dghqdyiwrvh845sz5csa5tpizxzvqbyhoyl44jeygdfvsdne5hv51bib5xz702wdcpm82li8uv5uxw8hwii1eh51bcsnfvuc9jx9nw8utz2n2b0v9zs10us87keivffzy1xky68pyfnacr3cfa6iwizuoy9d3e1xguugt8u12ng71pknwyj2xwjymp6gt6em8tpcw0m8l4x1zohutwzrklrvnjj5obqtlccv1gytgu1mv0ufzlmwbom24wib2gqyre2b0dqjzzdk4k38r3d9vxz5zot1b9tx4wiyh8h3usoofbx9g7mxs6y07uo7ds5p51k4fzk5nnnjoitgrurj1pxqgazpzc75o1vr650i5hsvnls9dnyhuafphp77rwgof7xey49royunvi0w3jol3nb1hssoyiusc03m8od1ic3lp4mrjy3xbji883g4lagp8vfdxvbfwut9l2qsdm13dtz5 == \z\9\x\v\r\7\1\a\9\g\q\1\c\t\6\m\r\0\c\9\r\p\4\9\8\l\1\4\u\c\t\2\t\3\9\z\0\g\3\4\d\g\h\q\d\y\i\w\r\v\h\8\4\5\s\z\5\c\s\a\5\t\p\i\z\x\z\v\q\b\y\h\o\y\l\4\4\j\e\y\g\d\f\v\s\d\n\e\5\h\v\5\1\b\i\b\5\x\z\7\0\2\w\d\c\p\m\8\2\l\i\8\u\v\5\u\x\w\8\h\w\i\i\1\e\h\5\1\b\c\s\n\f\v\u\c\9\j\x\9\n\w\8\u\t\z\2\n\2\b\0\v\9\z\s\1\0\u\s\8\7\k\e\i\v\f\f\z\y\1\x\k\y\6\8\p\y\f\n\a\c\r\3\c\f\a\6\i\w\i\z\u\o\y\9\d\3\e\1\x\g\u\u\g\t\8\u\1\2\n\g\7\1\p\k\n\w\y\j\2\x\w\j\y\m\p\6\g\t\6\e\m\8\t\p\c\w\0\m\8\l\4\x\1\z\o\h\u\t\w\z\r\k\l\r\v\n\j\j\5\o\b\q\t\l\c\c\v\1\g\y\t\g\u\1\m\v\0\u\f\z\l\m\w\b\o\m\2\4\w\i\b\2\g\q\y\r\e\2\b\0\d\q\j\z\z\d\k\4\k\3\8\r\3\d\9\v\x\z\5\z\o\t\1\b\9\t\x\4\w\i\y\h\8\h\3\u\s\o\o\f\b\x\9\g\7\m\x\s\6\y\0\7\u\o\7\d\s\5\p\5\1\k\4\f\z\k\5\n\n\n\j\o\i\t\g\r\u\r\j\1\p\x\q\g\a\z\p\z\c\7\5\o\1\v\r\6\5\0\i\5\h\s\v\n\l\s\9\d\n\y\h\u\a\f\p\h\p\7\7\r\w\g\o\f\7\x\e\y\4\9\r\o\y\u\n\v\i\0\w\3\j\o\l\3\n\b\1\h\s\s\o\y\i\u\s\c\0\3\m\8\o\d\1\i\c\3\l\p\4\m\r\j\y\3\x\b\j\i\8\8\3\g\4\l\a\g\p\8\v\f\d\x\v\b\f\w\u\t\9\l\2\q\s\d\m\1\3\d\t\z\5 ]] 00:14:10.769 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:10.769 13:32:23 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:14:10.769 [2024-05-15 13:32:23.691595] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:10.769 [2024-05-15 13:32:23.691958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76508 ] 00:14:10.769 [2024-05-15 13:32:23.813601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:10.769 [2024-05-15 13:32:23.830377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.025 [2024-05-15 13:32:23.882950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.284  Copying: 512/512 [B] (average 250 kBps) 00:14:11.284 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ z9xvr71a9gq1ct6mr0c9rp498l14uct2t39z0g34dghqdyiwrvh845sz5csa5tpizxzvqbyhoyl44jeygdfvsdne5hv51bib5xz702wdcpm82li8uv5uxw8hwii1eh51bcsnfvuc9jx9nw8utz2n2b0v9zs10us87keivffzy1xky68pyfnacr3cfa6iwizuoy9d3e1xguugt8u12ng71pknwyj2xwjymp6gt6em8tpcw0m8l4x1zohutwzrklrvnjj5obqtlccv1gytgu1mv0ufzlmwbom24wib2gqyre2b0dqjzzdk4k38r3d9vxz5zot1b9tx4wiyh8h3usoofbx9g7mxs6y07uo7ds5p51k4fzk5nnnjoitgrurj1pxqgazpzc75o1vr650i5hsvnls9dnyhuafphp77rwgof7xey49royunvi0w3jol3nb1hssoyiusc03m8od1ic3lp4mrjy3xbji883g4lagp8vfdxvbfwut9l2qsdm13dtz5 == \z\9\x\v\r\7\1\a\9\g\q\1\c\t\6\m\r\0\c\9\r\p\4\9\8\l\1\4\u\c\t\2\t\3\9\z\0\g\3\4\d\g\h\q\d\y\i\w\r\v\h\8\4\5\s\z\5\c\s\a\5\t\p\i\z\x\z\v\q\b\y\h\o\y\l\4\4\j\e\y\g\d\f\v\s\d\n\e\5\h\v\5\1\b\i\b\5\x\z\7\0\2\w\d\c\p\m\8\2\l\i\8\u\v\5\u\x\w\8\h\w\i\i\1\e\h\5\1\b\c\s\n\f\v\u\c\9\j\x\9\n\w\8\u\t\z\2\n\2\b\0\v\9\z\s\1\0\u\s\8\7\k\e\i\v\f\f\z\y\1\x\k\y\6\8\p\y\f\n\a\c\r\3\c\f\a\6\i\w\i\z\u\o\y\9\d\3\e\1\x\g\u\u\g\t\8\u\1\2\n\g\7\1\p\k\n\w\y\j\2\x\w\j\y\m\p\6\g\t\6\e\m\8\t\p\c\w\0\m\8\l\4\x\1\z\o\h\u\t\w\z\r\k\l\r\v\n\j\j\5\o\b\q\t\l\c\c\v\1\g\y\t\g\u\1\m\v\0\u\f\z\l\m\w\b\o\m\2\4\w\i\b\2\g\q\y\r\e\2\b\0\d\q\j\z\z\d\k\4\k\3\8\r\3\d\9\v\x\z\5\z\o\t\1\b\9\t\x\4\w\i\y\h\8\h\3\u\s\o\o\f\b\x\9\g\7\m\x\s\6\y\0\7\u\o\7\d\s\5\p\5\1\k\4\f\z\k\5\n\n\n\j\o\i\t\g\r\u\r\j\1\p\x\q\g\a\z\p\z\c\7\5\o\1\v\r\6\5\0\i\5\h\s\v\n\l\s\9\d\n\y\h\u\a\f\p\h\p\7\7\r\w\g\o\f\7\x\e\y\4\9\r\o\y\u\n\v\i\0\w\3\j\o\l\3\n\b\1\h\s\s\o\y\i\u\s\c\0\3\m\8\o\d\1\i\c\3\l\p\4\m\r\j\y\3\x\b\j\i\8\8\3\g\4\l\a\g\p\8\v\f\d\x\v\b\f\w\u\t\9\l\2\q\s\d\m\1\3\d\t\z\5 ]] 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:11.284 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:14:11.284 [2024-05-15 13:32:24.207771] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:11.284 [2024-05-15 13:32:24.208092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76515 ] 00:14:11.284 [2024-05-15 13:32:24.331208] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:11.284 [2024-05-15 13:32:24.351953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.542 [2024-05-15 13:32:24.404190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.799  Copying: 512/512 [B] (average 500 kBps) 00:14:11.799 00:14:11.799 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iu5a7ka0ffnq5o7yyu47t9cd031iwqms4ncttzkgp8wayr7plaighjc8t65ep91bvlq4gw18le54crbxzu4z6ie500tr1fn1yekhvk8khtw8ryztniyevf93h6jeq3j1thxdtf96pef0gumf2t4blifx1v495fsn8jgqwh6snpeh9o389oyvs06c9oh70u6bz0j130qgnmrnk3msx4ng44gd9v0cqk88dvu2dcntaqjra3rf1eq5y6g3161u005yihbk0mi826nx8ln1ebh3i3m2btso5cwizpkcgpju12yesh5lwgg2uj8xy3l4e0ffbdb3wnveeqb6kw87n9nn515nc08cutntd5xwe1zowns53y7a0qkqkbfj848jwpt41wal8zy7vceoyf3b9437sce1etr1ddkwhe274rjjphvbf3l2mlbldy58tr010yov76z1f3za4330inl2afukz522k8xi54a21oo5sm9qdy8tv48bnbgsi0rna8fvpys8 == \i\u\5\a\7\k\a\0\f\f\n\q\5\o\7\y\y\u\4\7\t\9\c\d\0\3\1\i\w\q\m\s\4\n\c\t\t\z\k\g\p\8\w\a\y\r\7\p\l\a\i\g\h\j\c\8\t\6\5\e\p\9\1\b\v\l\q\4\g\w\1\8\l\e\5\4\c\r\b\x\z\u\4\z\6\i\e\5\0\0\t\r\1\f\n\1\y\e\k\h\v\k\8\k\h\t\w\8\r\y\z\t\n\i\y\e\v\f\9\3\h\6\j\e\q\3\j\1\t\h\x\d\t\f\9\6\p\e\f\0\g\u\m\f\2\t\4\b\l\i\f\x\1\v\4\9\5\f\s\n\8\j\g\q\w\h\6\s\n\p\e\h\9\o\3\8\9\o\y\v\s\0\6\c\9\o\h\7\0\u\6\b\z\0\j\1\3\0\q\g\n\m\r\n\k\3\m\s\x\4\n\g\4\4\g\d\9\v\0\c\q\k\8\8\d\v\u\2\d\c\n\t\a\q\j\r\a\3\r\f\1\e\q\5\y\6\g\3\1\6\1\u\0\0\5\y\i\h\b\k\0\m\i\8\2\6\n\x\8\l\n\1\e\b\h\3\i\3\m\2\b\t\s\o\5\c\w\i\z\p\k\c\g\p\j\u\1\2\y\e\s\h\5\l\w\g\g\2\u\j\8\x\y\3\l\4\e\0\f\f\b\d\b\3\w\n\v\e\e\q\b\6\k\w\8\7\n\9\n\n\5\1\5\n\c\0\8\c\u\t\n\t\d\5\x\w\e\1\z\o\w\n\s\5\3\y\7\a\0\q\k\q\k\b\f\j\8\4\8\j\w\p\t\4\1\w\a\l\8\z\y\7\v\c\e\o\y\f\3\b\9\4\3\7\s\c\e\1\e\t\r\1\d\d\k\w\h\e\2\7\4\r\j\j\p\h\v\b\f\3\l\2\m\l\b\l\d\y\5\8\t\r\0\1\0\y\o\v\7\6\z\1\f\3\z\a\4\3\3\0\i\n\l\2\a\f\u\k\z\5\2\2\k\8\x\i\5\4\a\2\1\o\o\5\s\m\9\q\d\y\8\t\v\4\8\b\n\b\g\s\i\0\r\n\a\8\f\v\p\y\s\8 ]] 00:14:11.799 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:11.799 13:32:24 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:14:11.799 [2024-05-15 13:32:24.706387] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:11.799 [2024-05-15 13:32:24.706727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76523 ] 00:14:11.799 [2024-05-15 13:32:24.827619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:11.799 [2024-05-15 13:32:24.848565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.057 [2024-05-15 13:32:24.900957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.314  Copying: 512/512 [B] (average 500 kBps) 00:14:12.314 00:14:12.314 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iu5a7ka0ffnq5o7yyu47t9cd031iwqms4ncttzkgp8wayr7plaighjc8t65ep91bvlq4gw18le54crbxzu4z6ie500tr1fn1yekhvk8khtw8ryztniyevf93h6jeq3j1thxdtf96pef0gumf2t4blifx1v495fsn8jgqwh6snpeh9o389oyvs06c9oh70u6bz0j130qgnmrnk3msx4ng44gd9v0cqk88dvu2dcntaqjra3rf1eq5y6g3161u005yihbk0mi826nx8ln1ebh3i3m2btso5cwizpkcgpju12yesh5lwgg2uj8xy3l4e0ffbdb3wnveeqb6kw87n9nn515nc08cutntd5xwe1zowns53y7a0qkqkbfj848jwpt41wal8zy7vceoyf3b9437sce1etr1ddkwhe274rjjphvbf3l2mlbldy58tr010yov76z1f3za4330inl2afukz522k8xi54a21oo5sm9qdy8tv48bnbgsi0rna8fvpys8 == \i\u\5\a\7\k\a\0\f\f\n\q\5\o\7\y\y\u\4\7\t\9\c\d\0\3\1\i\w\q\m\s\4\n\c\t\t\z\k\g\p\8\w\a\y\r\7\p\l\a\i\g\h\j\c\8\t\6\5\e\p\9\1\b\v\l\q\4\g\w\1\8\l\e\5\4\c\r\b\x\z\u\4\z\6\i\e\5\0\0\t\r\1\f\n\1\y\e\k\h\v\k\8\k\h\t\w\8\r\y\z\t\n\i\y\e\v\f\9\3\h\6\j\e\q\3\j\1\t\h\x\d\t\f\9\6\p\e\f\0\g\u\m\f\2\t\4\b\l\i\f\x\1\v\4\9\5\f\s\n\8\j\g\q\w\h\6\s\n\p\e\h\9\o\3\8\9\o\y\v\s\0\6\c\9\o\h\7\0\u\6\b\z\0\j\1\3\0\q\g\n\m\r\n\k\3\m\s\x\4\n\g\4\4\g\d\9\v\0\c\q\k\8\8\d\v\u\2\d\c\n\t\a\q\j\r\a\3\r\f\1\e\q\5\y\6\g\3\1\6\1\u\0\0\5\y\i\h\b\k\0\m\i\8\2\6\n\x\8\l\n\1\e\b\h\3\i\3\m\2\b\t\s\o\5\c\w\i\z\p\k\c\g\p\j\u\1\2\y\e\s\h\5\l\w\g\g\2\u\j\8\x\y\3\l\4\e\0\f\f\b\d\b\3\w\n\v\e\e\q\b\6\k\w\8\7\n\9\n\n\5\1\5\n\c\0\8\c\u\t\n\t\d\5\x\w\e\1\z\o\w\n\s\5\3\y\7\a\0\q\k\q\k\b\f\j\8\4\8\j\w\p\t\4\1\w\a\l\8\z\y\7\v\c\e\o\y\f\3\b\9\4\3\7\s\c\e\1\e\t\r\1\d\d\k\w\h\e\2\7\4\r\j\j\p\h\v\b\f\3\l\2\m\l\b\l\d\y\5\8\t\r\0\1\0\y\o\v\7\6\z\1\f\3\z\a\4\3\3\0\i\n\l\2\a\f\u\k\z\5\2\2\k\8\x\i\5\4\a\2\1\o\o\5\s\m\9\q\d\y\8\t\v\4\8\b\n\b\g\s\i\0\r\n\a\8\f\v\p\y\s\8 ]] 00:14:12.314 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:12.314 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:14:12.314 [2024-05-15 13:32:25.212781] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:12.314 [2024-05-15 13:32:25.213100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76530 ] 00:14:12.314 [2024-05-15 13:32:25.334063] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:12.314 [2024-05-15 13:32:25.349697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.314 [2024-05-15 13:32:25.409823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.571  Copying: 512/512 [B] (average 500 kBps) 00:14:12.571 00:14:12.828 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iu5a7ka0ffnq5o7yyu47t9cd031iwqms4ncttzkgp8wayr7plaighjc8t65ep91bvlq4gw18le54crbxzu4z6ie500tr1fn1yekhvk8khtw8ryztniyevf93h6jeq3j1thxdtf96pef0gumf2t4blifx1v495fsn8jgqwh6snpeh9o389oyvs06c9oh70u6bz0j130qgnmrnk3msx4ng44gd9v0cqk88dvu2dcntaqjra3rf1eq5y6g3161u005yihbk0mi826nx8ln1ebh3i3m2btso5cwizpkcgpju12yesh5lwgg2uj8xy3l4e0ffbdb3wnveeqb6kw87n9nn515nc08cutntd5xwe1zowns53y7a0qkqkbfj848jwpt41wal8zy7vceoyf3b9437sce1etr1ddkwhe274rjjphvbf3l2mlbldy58tr010yov76z1f3za4330inl2afukz522k8xi54a21oo5sm9qdy8tv48bnbgsi0rna8fvpys8 == \i\u\5\a\7\k\a\0\f\f\n\q\5\o\7\y\y\u\4\7\t\9\c\d\0\3\1\i\w\q\m\s\4\n\c\t\t\z\k\g\p\8\w\a\y\r\7\p\l\a\i\g\h\j\c\8\t\6\5\e\p\9\1\b\v\l\q\4\g\w\1\8\l\e\5\4\c\r\b\x\z\u\4\z\6\i\e\5\0\0\t\r\1\f\n\1\y\e\k\h\v\k\8\k\h\t\w\8\r\y\z\t\n\i\y\e\v\f\9\3\h\6\j\e\q\3\j\1\t\h\x\d\t\f\9\6\p\e\f\0\g\u\m\f\2\t\4\b\l\i\f\x\1\v\4\9\5\f\s\n\8\j\g\q\w\h\6\s\n\p\e\h\9\o\3\8\9\o\y\v\s\0\6\c\9\o\h\7\0\u\6\b\z\0\j\1\3\0\q\g\n\m\r\n\k\3\m\s\x\4\n\g\4\4\g\d\9\v\0\c\q\k\8\8\d\v\u\2\d\c\n\t\a\q\j\r\a\3\r\f\1\e\q\5\y\6\g\3\1\6\1\u\0\0\5\y\i\h\b\k\0\m\i\8\2\6\n\x\8\l\n\1\e\b\h\3\i\3\m\2\b\t\s\o\5\c\w\i\z\p\k\c\g\p\j\u\1\2\y\e\s\h\5\l\w\g\g\2\u\j\8\x\y\3\l\4\e\0\f\f\b\d\b\3\w\n\v\e\e\q\b\6\k\w\8\7\n\9\n\n\5\1\5\n\c\0\8\c\u\t\n\t\d\5\x\w\e\1\z\o\w\n\s\5\3\y\7\a\0\q\k\q\k\b\f\j\8\4\8\j\w\p\t\4\1\w\a\l\8\z\y\7\v\c\e\o\y\f\3\b\9\4\3\7\s\c\e\1\e\t\r\1\d\d\k\w\h\e\2\7\4\r\j\j\p\h\v\b\f\3\l\2\m\l\b\l\d\y\5\8\t\r\0\1\0\y\o\v\7\6\z\1\f\3\z\a\4\3\3\0\i\n\l\2\a\f\u\k\z\5\2\2\k\8\x\i\5\4\a\2\1\o\o\5\s\m\9\q\d\y\8\t\v\4\8\b\n\b\g\s\i\0\r\n\a\8\f\v\p\y\s\8 ]] 00:14:12.829 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:14:12.829 13:32:25 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:14:12.829 [2024-05-15 13:32:25.723682] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:12.829 [2024-05-15 13:32:25.724040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76538 ] 00:14:12.829 [2024-05-15 13:32:25.852209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
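The misc-flags case iterates the read flags (direct, nonblock) against the write flags (direct, nonblock, sync, dsync), regenerates 512 random bytes for each read flag, copies with every combination, and compares source and destination; the very long [[ ... == ... ]] lines above are that comparison rendered by xtrace with the expected value glob-escaped. A stripped-down sketch of the same loop, assuming spdk_dd is on PATH and using head/cmp in place of the harness's gen_bytes and base64 compare:

for iflag in direct nonblock; do
  head -c 512 /dev/urandom > dd.dump0        # stand-in for gen_bytes 512
  for oflag in direct nonblock sync dsync; do
    spdk_dd --aio --if=dd.dump0 --iflag=$iflag --of=dd.dump1 --oflag=$oflag
    cmp dd.dump0 dd.dump1                    # destination must match the source byte for byte
  done
done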
00:14:12.829 [2024-05-15 13:32:25.866084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.086 [2024-05-15 13:32:25.934291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.344  Copying: 512/512 [B] (average 125 kBps) 00:14:13.344 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ iu5a7ka0ffnq5o7yyu47t9cd031iwqms4ncttzkgp8wayr7plaighjc8t65ep91bvlq4gw18le54crbxzu4z6ie500tr1fn1yekhvk8khtw8ryztniyevf93h6jeq3j1thxdtf96pef0gumf2t4blifx1v495fsn8jgqwh6snpeh9o389oyvs06c9oh70u6bz0j130qgnmrnk3msx4ng44gd9v0cqk88dvu2dcntaqjra3rf1eq5y6g3161u005yihbk0mi826nx8ln1ebh3i3m2btso5cwizpkcgpju12yesh5lwgg2uj8xy3l4e0ffbdb3wnveeqb6kw87n9nn515nc08cutntd5xwe1zowns53y7a0qkqkbfj848jwpt41wal8zy7vceoyf3b9437sce1etr1ddkwhe274rjjphvbf3l2mlbldy58tr010yov76z1f3za4330inl2afukz522k8xi54a21oo5sm9qdy8tv48bnbgsi0rna8fvpys8 == \i\u\5\a\7\k\a\0\f\f\n\q\5\o\7\y\y\u\4\7\t\9\c\d\0\3\1\i\w\q\m\s\4\n\c\t\t\z\k\g\p\8\w\a\y\r\7\p\l\a\i\g\h\j\c\8\t\6\5\e\p\9\1\b\v\l\q\4\g\w\1\8\l\e\5\4\c\r\b\x\z\u\4\z\6\i\e\5\0\0\t\r\1\f\n\1\y\e\k\h\v\k\8\k\h\t\w\8\r\y\z\t\n\i\y\e\v\f\9\3\h\6\j\e\q\3\j\1\t\h\x\d\t\f\9\6\p\e\f\0\g\u\m\f\2\t\4\b\l\i\f\x\1\v\4\9\5\f\s\n\8\j\g\q\w\h\6\s\n\p\e\h\9\o\3\8\9\o\y\v\s\0\6\c\9\o\h\7\0\u\6\b\z\0\j\1\3\0\q\g\n\m\r\n\k\3\m\s\x\4\n\g\4\4\g\d\9\v\0\c\q\k\8\8\d\v\u\2\d\c\n\t\a\q\j\r\a\3\r\f\1\e\q\5\y\6\g\3\1\6\1\u\0\0\5\y\i\h\b\k\0\m\i\8\2\6\n\x\8\l\n\1\e\b\h\3\i\3\m\2\b\t\s\o\5\c\w\i\z\p\k\c\g\p\j\u\1\2\y\e\s\h\5\l\w\g\g\2\u\j\8\x\y\3\l\4\e\0\f\f\b\d\b\3\w\n\v\e\e\q\b\6\k\w\8\7\n\9\n\n\5\1\5\n\c\0\8\c\u\t\n\t\d\5\x\w\e\1\z\o\w\n\s\5\3\y\7\a\0\q\k\q\k\b\f\j\8\4\8\j\w\p\t\4\1\w\a\l\8\z\y\7\v\c\e\o\y\f\3\b\9\4\3\7\s\c\e\1\e\t\r\1\d\d\k\w\h\e\2\7\4\r\j\j\p\h\v\b\f\3\l\2\m\l\b\l\d\y\5\8\t\r\0\1\0\y\o\v\7\6\z\1\f\3\z\a\4\3\3\0\i\n\l\2\a\f\u\k\z\5\2\2\k\8\x\i\5\4\a\2\1\o\o\5\s\m\9\q\d\y\8\t\v\4\8\b\n\b\g\s\i\0\r\n\a\8\f\v\p\y\s\8 ]] 00:14:13.344 00:14:13.344 real 0m4.190s 00:14:13.344 user 0m2.060s 00:14:13.344 sys 0m1.121s 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.344 ************************************ 00:14:13.344 END TEST dd_flags_misc_forced_aio 00:14:13.344 ************************************ 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:14:13.344 ************************************ 00:14:13.344 END TEST spdk_dd_posix 00:14:13.344 ************************************ 00:14:13.344 00:14:13.344 real 0m19.856s 00:14:13.344 user 0m9.018s 00:14:13.344 sys 0m6.471s 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:13.344 13:32:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:14:13.344 13:32:26 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:14:13.344 13:32:26 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:13.344 13:32:26 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.344 13:32:26 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.344 ************************************ 00:14:13.344 START TEST spdk_dd_malloc 00:14:13.344 ************************************ 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:14:13.344 * Looking for test storage... 00:14:13.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- 
# '[' 2 -le 1 ']' 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:14:13.344 ************************************ 00:14:13.344 START TEST dd_malloc_copy 00:14:13.344 ************************************ 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:13.344 13:32:26 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:14:13.603 [2024-05-15 13:32:26.454929] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:13.603 [2024-05-15 13:32:26.455258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76606 ] 00:14:13.603 { 00:14:13.603 "subsystems": [ 00:14:13.603 { 00:14:13.603 "subsystem": "bdev", 00:14:13.603 "config": [ 00:14:13.603 { 00:14:13.603 "params": { 00:14:13.603 "block_size": 512, 00:14:13.603 "num_blocks": 1048576, 00:14:13.603 "name": "malloc0" 00:14:13.603 }, 00:14:13.603 "method": "bdev_malloc_create" 00:14:13.603 }, 00:14:13.603 { 00:14:13.603 "params": { 00:14:13.603 "block_size": 512, 00:14:13.603 "num_blocks": 1048576, 00:14:13.603 "name": "malloc1" 00:14:13.603 }, 00:14:13.603 "method": "bdev_malloc_create" 00:14:13.603 }, 00:14:13.603 { 00:14:13.603 "method": "bdev_wait_for_examine" 00:14:13.603 } 00:14:13.603 ] 00:14:13.603 } 00:14:13.603 ] 00:14:13.603 } 00:14:13.603 [2024-05-15 13:32:26.575544] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
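dd_malloc_copy drives spdk_dd entirely through the JSON printed above, handed over on file descriptor 62: two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each) are created, bdev_wait_for_examine is issued, and the copy runs bdev-to-bdev with --ib/--ob; the second half of the test (below) repeats the copy with the bdevs swapped. The run can be replayed by hand by saving that object to a file, here a hypothetical malloc_copy.json:

# malloc_copy.json holds the exact "subsystems" object printed above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json ./malloc_copy.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json ./malloc_copy.json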
00:14:13.603 [2024-05-15 13:32:26.590565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.603 [2024-05-15 13:32:26.663630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.159  Copying: 203/512 [MB] (203 MBps) Copying: 417/512 [MB] (213 MBps) Copying: 512/512 [MB] (average 209 MBps) 00:14:17.159 00:14:17.159 13:32:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:14:17.159 13:32:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:14:17.159 13:32:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:17.159 13:32:29 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:14:17.159 [2024-05-15 13:32:30.015324] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:17.159 [2024-05-15 13:32:30.015665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76654 ] 00:14:17.159 { 00:14:17.159 "subsystems": [ 00:14:17.159 { 00:14:17.159 "subsystem": "bdev", 00:14:17.159 "config": [ 00:14:17.159 { 00:14:17.159 "params": { 00:14:17.159 "block_size": 512, 00:14:17.159 "num_blocks": 1048576, 00:14:17.159 "name": "malloc0" 00:14:17.159 }, 00:14:17.159 "method": "bdev_malloc_create" 00:14:17.159 }, 00:14:17.159 { 00:14:17.159 "params": { 00:14:17.159 "block_size": 512, 00:14:17.159 "num_blocks": 1048576, 00:14:17.159 "name": "malloc1" 00:14:17.159 }, 00:14:17.159 "method": "bdev_malloc_create" 00:14:17.159 }, 00:14:17.159 { 00:14:17.159 "method": "bdev_wait_for_examine" 00:14:17.159 } 00:14:17.159 ] 00:14:17.159 } 00:14:17.159 ] 00:14:17.159 } 00:14:17.159 [2024-05-15 13:32:30.136534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:17.159 [2024-05-15 13:32:30.148556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.159 [2024-05-15 13:32:30.202451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.651  Copying: 205/512 [MB] (205 MBps) Copying: 415/512 [MB] (210 MBps) Copying: 512/512 [MB] (average 208 MBps) 00:14:20.651 00:14:20.651 ************************************ 00:14:20.651 END TEST dd_malloc_copy 00:14:20.651 ************************************ 00:14:20.651 00:14:20.651 real 0m7.069s 00:14:20.651 user 0m6.154s 00:14:20.651 sys 0m0.745s 00:14:20.651 13:32:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.651 13:32:33 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:14:20.651 ************************************ 00:14:20.651 END TEST spdk_dd_malloc 00:14:20.651 ************************************ 00:14:20.651 00:14:20.651 real 0m7.223s 00:14:20.651 user 0m6.215s 00:14:20.651 sys 0m0.838s 00:14:20.651 13:32:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.651 13:32:33 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:14:20.651 13:32:33 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:20.651 13:32:33 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:20.651 13:32:33 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.651 13:32:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:20.651 ************************************ 00:14:20.651 START TEST spdk_dd_bdev_to_bdev 00:14:20.651 ************************************ 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:14:20.651 * Looking for test storage... 
00:14:20.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:14:20.651 
13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:20.651 ************************************ 00:14:20.651 START TEST dd_inflate_file 00:14:20.651 ************************************ 00:14:20.651 13:32:33 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:14:20.651 [2024-05-15 13:32:33.745897] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:20.651 [2024-05-15 13:32:33.746283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76758 ] 00:14:20.907 [2024-05-15 13:32:33.875611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:20.907 [2024-05-15 13:32:33.889481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.907 [2024-05-15 13:32:33.945222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.164  Copying: 64/64 [MB] (average 1454 MBps) 00:14:21.164 00:14:21.164 ************************************ 00:14:21.164 END TEST dd_inflate_file 00:14:21.164 ************************************ 00:14:21.164 00:14:21.164 real 0m0.537s 00:14:21.164 user 0m0.293s 00:14:21.164 sys 0m0.287s 00:14:21.164 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.164 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:21.422 ************************************ 00:14:21.422 START TEST dd_copy_to_out_bdev 00:14:21.422 ************************************ 00:14:21.422 13:32:34 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:14:21.422 [2024-05-15 13:32:34.330300] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:21.422 [2024-05-15 13:32:34.330598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76792 ] 00:14:21.422 { 00:14:21.422 "subsystems": [ 00:14:21.422 { 00:14:21.422 "subsystem": "bdev", 00:14:21.422 "config": [ 00:14:21.422 { 00:14:21.422 "params": { 00:14:21.422 "trtype": "pcie", 00:14:21.422 "traddr": "0000:00:10.0", 00:14:21.422 "name": "Nvme0" 00:14:21.422 }, 00:14:21.422 "method": "bdev_nvme_attach_controller" 00:14:21.422 }, 00:14:21.422 { 00:14:21.422 "params": { 00:14:21.422 "trtype": "pcie", 00:14:21.422 "traddr": "0000:00:11.0", 00:14:21.422 "name": "Nvme1" 00:14:21.422 }, 00:14:21.422 "method": "bdev_nvme_attach_controller" 00:14:21.422 }, 00:14:21.422 { 00:14:21.422 "method": "bdev_wait_for_examine" 00:14:21.422 } 00:14:21.422 ] 00:14:21.422 } 00:14:21.422 ] 00:14:21.422 } 00:14:21.422 [2024-05-15 13:32:34.452201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:21.422 [2024-05-15 13:32:34.468227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.679 [2024-05-15 13:32:34.545532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.871  Copying: 64/64 [MB] (average 75 MBps) 00:14:22.871 00:14:22.871 ************************************ 00:14:22.871 END TEST dd_copy_to_out_bdev 00:14:22.871 ************************************ 00:14:22.871 00:14:22.871 real 0m1.547s 00:14:22.871 user 0m1.312s 00:14:22.871 sys 0m1.192s 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:22.871 ************************************ 00:14:22.871 START TEST dd_offset_magic 00:14:22.871 ************************************ 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:14:22.871 13:32:35 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:14:22.871 [2024-05-15 13:32:35.931215] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:14:22.871 [2024-05-15 13:32:35.932127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76837 ] 00:14:22.871 { 00:14:22.871 "subsystems": [ 00:14:22.871 { 00:14:22.871 "subsystem": "bdev", 00:14:22.871 "config": [ 00:14:22.871 { 00:14:22.871 "params": { 00:14:22.871 "trtype": "pcie", 00:14:22.871 "traddr": "0000:00:10.0", 00:14:22.871 "name": "Nvme0" 00:14:22.871 }, 00:14:22.871 "method": "bdev_nvme_attach_controller" 00:14:22.871 }, 00:14:22.871 { 00:14:22.871 "params": { 00:14:22.871 "trtype": "pcie", 00:14:22.871 "traddr": "0000:00:11.0", 00:14:22.871 "name": "Nvme1" 00:14:22.871 }, 00:14:22.871 "method": "bdev_nvme_attach_controller" 00:14:22.871 }, 00:14:22.871 { 00:14:22.872 "method": "bdev_wait_for_examine" 00:14:22.872 } 00:14:22.872 ] 00:14:22.872 } 00:14:22.872 ] 00:14:22.872 } 00:14:23.129 [2024-05-15 13:32:36.055194] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:23.129 [2024-05-15 13:32:36.071763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.129 [2024-05-15 13:32:36.132530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.646  Copying: 65/65 [MB] (average 1160 MBps) 00:14:23.646 00:14:23.646 13:32:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:23.646 13:32:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:14:23.646 13:32:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:14:23.646 13:32:36 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:14:23.646 [2024-05-15 13:32:36.666275] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:23.646 [2024-05-15 13:32:36.666641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76846 ] 00:14:23.646 { 00:14:23.646 "subsystems": [ 00:14:23.646 { 00:14:23.646 "subsystem": "bdev", 00:14:23.646 "config": [ 00:14:23.646 { 00:14:23.646 "params": { 00:14:23.646 "trtype": "pcie", 00:14:23.646 "traddr": "0000:00:10.0", 00:14:23.646 "name": "Nvme0" 00:14:23.646 }, 00:14:23.646 "method": "bdev_nvme_attach_controller" 00:14:23.646 }, 00:14:23.646 { 00:14:23.646 "params": { 00:14:23.646 "trtype": "pcie", 00:14:23.646 "traddr": "0000:00:11.0", 00:14:23.646 "name": "Nvme1" 00:14:23.646 }, 00:14:23.646 "method": "bdev_nvme_attach_controller" 00:14:23.646 }, 00:14:23.646 { 00:14:23.646 "method": "bdev_wait_for_examine" 00:14:23.646 } 00:14:23.646 ] 00:14:23.646 } 00:14:23.646 ] 00:14:23.646 } 00:14:23.904 [2024-05-15 13:32:36.793296] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:23.904 [2024-05-15 13:32:36.813582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.904 [2024-05-15 13:32:36.870657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.162  Copying: 1024/1024 [kB] (average 1000 MBps) 00:14:24.162 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:14:24.162 13:32:37 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:14:24.420 { 00:14:24.420 "subsystems": [ 00:14:24.420 { 00:14:24.420 "subsystem": "bdev", 00:14:24.420 "config": [ 00:14:24.420 { 00:14:24.420 "params": { 00:14:24.420 "trtype": "pcie", 00:14:24.420 "traddr": "0000:00:10.0", 00:14:24.420 "name": "Nvme0" 00:14:24.420 }, 00:14:24.420 "method": "bdev_nvme_attach_controller" 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "params": { 00:14:24.420 "trtype": "pcie", 00:14:24.420 "traddr": "0000:00:11.0", 00:14:24.420 "name": "Nvme1" 00:14:24.420 }, 00:14:24.420 "method": "bdev_nvme_attach_controller" 00:14:24.420 }, 00:14:24.420 { 00:14:24.420 "method": "bdev_wait_for_examine" 00:14:24.420 } 00:14:24.420 ] 00:14:24.420 } 00:14:24.420 ] 00:14:24.420 } 00:14:24.420 [2024-05-15 13:32:37.317444] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:24.420 [2024-05-15 13:32:37.317957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76868 ] 00:14:24.420 [2024-05-15 13:32:37.455136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:24.420 [2024-05-15 13:32:37.473928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.678 [2024-05-15 13:32:37.548746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.937  Copying: 65/65 [MB] (average 1274 MBps) 00:14:24.937 00:14:25.195 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:14:25.195 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:14:25.195 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:14:25.195 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:14:25.195 { 00:14:25.195 "subsystems": [ 00:14:25.195 { 00:14:25.195 "subsystem": "bdev", 00:14:25.195 "config": [ 00:14:25.195 { 00:14:25.195 "params": { 00:14:25.195 "trtype": "pcie", 00:14:25.195 "traddr": "0000:00:10.0", 00:14:25.195 "name": "Nvme0" 00:14:25.195 }, 00:14:25.195 "method": "bdev_nvme_attach_controller" 00:14:25.195 }, 00:14:25.195 { 00:14:25.195 "params": { 00:14:25.195 "trtype": "pcie", 00:14:25.195 "traddr": "0000:00:11.0", 00:14:25.195 "name": "Nvme1" 00:14:25.195 }, 00:14:25.195 "method": "bdev_nvme_attach_controller" 00:14:25.195 }, 00:14:25.195 { 00:14:25.195 "method": "bdev_wait_for_examine" 00:14:25.195 } 00:14:25.195 ] 00:14:25.195 } 00:14:25.195 ] 00:14:25.195 } 00:14:25.195 [2024-05-15 13:32:38.096026] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:25.195 [2024-05-15 13:32:38.096374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76888 ] 00:14:25.195 [2024-05-15 13:32:38.225817] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:25.195 [2024-05-15 13:32:38.243129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.453 [2024-05-15 13:32:38.295716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.712  Copying: 1024/1024 [kB] (average 500 MBps) 00:14:25.712 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:14:25.712 00:14:25.712 real 0m2.797s 00:14:25.712 user 0m1.894s 00:14:25.712 sys 0m0.901s 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 ************************************ 00:14:25.712 END TEST dd_offset_magic 00:14:25.712 ************************************ 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:14:25.712 13:32:38 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:25.712 [2024-05-15 13:32:38.771826] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:25.712 [2024-05-15 13:32:38.772205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76914 ] 00:14:25.712 { 00:14:25.712 "subsystems": [ 00:14:25.712 { 00:14:25.712 "subsystem": "bdev", 00:14:25.712 "config": [ 00:14:25.712 { 00:14:25.712 "params": { 00:14:25.712 "trtype": "pcie", 00:14:25.712 "traddr": "0000:00:10.0", 00:14:25.712 "name": "Nvme0" 00:14:25.712 }, 00:14:25.712 "method": "bdev_nvme_attach_controller" 00:14:25.712 }, 00:14:25.712 { 00:14:25.712 "params": { 00:14:25.712 "trtype": "pcie", 00:14:25.712 "traddr": "0000:00:11.0", 00:14:25.712 "name": "Nvme1" 00:14:25.712 }, 00:14:25.712 "method": "bdev_nvme_attach_controller" 00:14:25.712 }, 00:14:25.712 { 00:14:25.712 "method": "bdev_wait_for_examine" 00:14:25.712 } 00:14:25.712 ] 00:14:25.712 } 00:14:25.712 ] 00:14:25.712 } 00:14:25.970 [2024-05-15 13:32:38.899470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:25.970 [2024-05-15 13:32:38.913298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.970 [2024-05-15 13:32:38.979744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.486  Copying: 5120/5120 [kB] (average 1666 MBps) 00:14:26.486 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:14:26.486 13:32:39 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:26.486 [2024-05-15 13:32:39.418331] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:26.486 [2024-05-15 13:32:39.419379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76935 ] 00:14:26.486 { 00:14:26.486 "subsystems": [ 00:14:26.486 { 00:14:26.486 "subsystem": "bdev", 00:14:26.486 "config": [ 00:14:26.486 { 00:14:26.486 "params": { 00:14:26.486 "trtype": "pcie", 00:14:26.486 "traddr": "0000:00:10.0", 00:14:26.486 "name": "Nvme0" 00:14:26.486 }, 00:14:26.486 "method": "bdev_nvme_attach_controller" 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "params": { 00:14:26.486 "trtype": "pcie", 00:14:26.486 "traddr": "0000:00:11.0", 00:14:26.486 "name": "Nvme1" 00:14:26.486 }, 00:14:26.486 "method": "bdev_nvme_attach_controller" 00:14:26.486 }, 00:14:26.486 { 00:14:26.486 "method": "bdev_wait_for_examine" 00:14:26.486 } 00:14:26.486 ] 00:14:26.486 } 00:14:26.486 ] 00:14:26.486 } 00:14:26.486 [2024-05-15 13:32:39.548977] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:26.486 [2024-05-15 13:32:39.569560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.744 [2024-05-15 13:32:39.637612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.032  Copying: 5120/5120 [kB] (average 1000 MBps) 00:14:27.032 00:14:27.032 13:32:40 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:14:27.032 ************************************ 00:14:27.032 END TEST spdk_dd_bdev_to_bdev 00:14:27.032 ************************************ 00:14:27.032 00:14:27.032 real 0m6.507s 00:14:27.032 user 0m4.507s 00:14:27.032 sys 0m3.053s 00:14:27.032 13:32:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.032 13:32:40 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:27.032 13:32:40 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:14:27.032 13:32:40 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:27.032 13:32:40 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:27.032 13:32:40 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.032 13:32:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:27.292 ************************************ 00:14:27.292 START TEST spdk_dd_uring 00:14:27.292 ************************************ 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:14:27.292 * Looking for test storage... 00:14:27.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.292 13:32:40 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:14:27.293 ************************************ 00:14:27.293 START TEST dd_uring_copy 00:14:27.293 ************************************ 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1121 -- # uring_zram_copy 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
dd/common.sh@186 -- # echo 512M 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=gy2g4jlvbuasgt05zdfojqyfpuhn618cml9gytkff7qrswj07ufjr7b8ds7iyiwnquow83c6chol5gy596at26cjtrw8osgbkvu1wo1lp0oj112lnzqanvbc7wdxoz48w43svu1efitx730cbks91ukhsqm6rxi9zd8nx1vq2oz9p3p92e5sr7gtnaqn9bsanxnu8q7wayt3btpu18xwzz15712zjqmgt5vg7iofkiz1z29835eb8ry62k7mq9cn3qaj1h4krvtt27eqp9rsw1pz4e3s0u060mef1pblkbg7e2cmbh3x8f7y494rk5nadpctogk0mbqw5u4xyw387tpnzukc6147p1a6vxzp69xode8iusskp6h3uw3xwdtiw67pyhvv5abj2ir5pye7y1xo2vprr8s16yri7lctrcxtdswyodrqg9dum6rlb60uh0iqk6zk769d8kt5hfwmrdsa2k9loajhepmhj02vv9kansqz6tvvwykgykwscejw10yppw7l5r46nfrhplpvr5ukw7rx7qdee26o9p2n0zxf7bxhssz0740zrfc98hwg0q8fivrvzwgdboma2qdh45bytds9driqlwoj5ldm65t7r896b4o0m8h46hqzwtk11m3nrwihm7o2mt6z00r5y7y193d1cjuozruazq1dn1ct2whlnqjv02d5aarcb6kzzwv4jxfi531781c9w567yvia7rt6ytph82wnlmxiafdkqo6ifgna8coc2rf0ucdo3szltyzmrfhwxvrd0ce9aj2bmdx7qq08ywmi3ls6jjmx2yooh995oer881yjdks1lzwsy3h0co0dv7seggyfq15y0og6sq32m2493lez7p21j2jn1y88ein53g5mybwsna1kiupjdr7nl2r625lklxifkok6m6wyl4jvhuojltue7cb3zex32rbghocyu9wh2isycdqm80sg9o4a178nzoyhc99ia8xpcokd0tb6ee4bvdguorl4506sx8ev0zs7 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo gy2g4jlvbuasgt05zdfojqyfpuhn618cml9gytkff7qrswj07ufjr7b8ds7iyiwnquow83c6chol5gy596at26cjtrw8osgbkvu1wo1lp0oj112lnzqanvbc7wdxoz48w43svu1efitx730cbks91ukhsqm6rxi9zd8nx1vq2oz9p3p92e5sr7gtnaqn9bsanxnu8q7wayt3btpu18xwzz15712zjqmgt5vg7iofkiz1z29835eb8ry62k7mq9cn3qaj1h4krvtt27eqp9rsw1pz4e3s0u060mef1pblkbg7e2cmbh3x8f7y494rk5nadpctogk0mbqw5u4xyw387tpnzukc6147p1a6vxzp69xode8iusskp6h3uw3xwdtiw67pyhvv5abj2ir5pye7y1xo2vprr8s16yri7lctrcxtdswyodrqg9dum6rlb60uh0iqk6zk769d8kt5hfwmrdsa2k9loajhepmhj02vv9kansqz6tvvwykgykwscejw10yppw7l5r46nfrhplpvr5ukw7rx7qdee26o9p2n0zxf7bxhssz0740zrfc98hwg0q8fivrvzwgdboma2qdh45bytds9driqlwoj5ldm65t7r896b4o0m8h46hqzwtk11m3nrwihm7o2mt6z00r5y7y193d1cjuozruazq1dn1ct2whlnqjv02d5aarcb6kzzwv4jxfi531781c9w567yvia7rt6ytph82wnlmxiafdkqo6ifgna8coc2rf0ucdo3szltyzmrfhwxvrd0ce9aj2bmdx7qq08ywmi3ls6jjmx2yooh995oer881yjdks1lzwsy3h0co0dv7seggyfq15y0og6sq32m2493lez7p21j2jn1y88ein53g5mybwsna1kiupjdr7nl2r625lklxifkok6m6wyl4jvhuojltue7cb3zex32rbghocyu9wh2isycdqm80sg9o4a178nzoyhc99ia8xpcokd0tb6ee4bvdguorl4506sx8ev0zs7 00:14:27.293 13:32:40 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero 
--of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:14:27.293 [2024-05-15 13:32:40.312263] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:27.293 [2024-05-15 13:32:40.312567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:14:27.551 [2024-05-15 13:32:40.433438] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:27.551 [2024-05-15 13:32:40.451282] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.551 [2024-05-15 13:32:40.506733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.683  Copying: 511/511 [MB] (average 1273 MBps) 00:14:28.683 00:14:28.683 13:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:14:28.683 13:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:14:28.683 13:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:28.683 13:32:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:28.684 [2024-05-15 13:32:41.532760] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:28.684 [2024-05-15 13:32:41.532850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77021 ] 00:14:28.684 { 00:14:28.684 "subsystems": [ 00:14:28.684 { 00:14:28.684 "subsystem": "bdev", 00:14:28.684 "config": [ 00:14:28.684 { 00:14:28.684 "params": { 00:14:28.684 "block_size": 512, 00:14:28.684 "num_blocks": 1048576, 00:14:28.684 "name": "malloc0" 00:14:28.684 }, 00:14:28.684 "method": "bdev_malloc_create" 00:14:28.684 }, 00:14:28.684 { 00:14:28.684 "params": { 00:14:28.684 "filename": "/dev/zram1", 00:14:28.684 "name": "uring0" 00:14:28.684 }, 00:14:28.684 "method": "bdev_uring_create" 00:14:28.684 }, 00:14:28.684 { 00:14:28.684 "method": "bdev_wait_for_examine" 00:14:28.684 } 00:14:28.684 ] 00:14:28.684 } 00:14:28.684 ] 00:14:28.684 } 00:14:28.684 [2024-05-15 13:32:41.653312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:28.684 [2024-05-15 13:32:41.664901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.684 [2024-05-15 13:32:41.718000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.507  Copying: 239/512 [MB] (239 MBps) Copying: 475/512 [MB] (236 MBps) Copying: 512/512 [MB] (average 236 MBps) 00:14:31.507 00:14:31.508 13:32:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:14:31.508 13:32:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:14:31.508 13:32:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:31.508 13:32:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:31.508 [2024-05-15 13:32:44.490878] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:31.508 [2024-05-15 13:32:44.490985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77066 ] 00:14:31.508 { 00:14:31.508 "subsystems": [ 00:14:31.508 { 00:14:31.508 "subsystem": "bdev", 00:14:31.508 "config": [ 00:14:31.508 { 00:14:31.508 "params": { 00:14:31.508 "block_size": 512, 00:14:31.508 "num_blocks": 1048576, 00:14:31.508 "name": "malloc0" 00:14:31.508 }, 00:14:31.508 "method": "bdev_malloc_create" 00:14:31.508 }, 00:14:31.508 { 00:14:31.508 "params": { 00:14:31.508 "filename": "/dev/zram1", 00:14:31.508 "name": "uring0" 00:14:31.508 }, 00:14:31.508 "method": "bdev_uring_create" 00:14:31.508 }, 00:14:31.508 { 00:14:31.508 "method": "bdev_wait_for_examine" 00:14:31.508 } 00:14:31.508 ] 00:14:31.508 } 00:14:31.508 ] 00:14:31.508 } 00:14:31.765 [2024-05-15 13:32:44.614207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:31.765 [2024-05-15 13:32:44.630925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.765 [2024-05-15 13:32:44.701186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.947  Copying: 175/512 [MB] (175 MBps) Copying: 397/512 [MB] (222 MBps) Copying: 512/512 [MB] (average 204 MBps) 00:14:34.947 00:14:34.947 13:32:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:14:34.947 13:32:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ gy2g4jlvbuasgt05zdfojqyfpuhn618cml9gytkff7qrswj07ufjr7b8ds7iyiwnquow83c6chol5gy596at26cjtrw8osgbkvu1wo1lp0oj112lnzqanvbc7wdxoz48w43svu1efitx730cbks91ukhsqm6rxi9zd8nx1vq2oz9p3p92e5sr7gtnaqn9bsanxnu8q7wayt3btpu18xwzz15712zjqmgt5vg7iofkiz1z29835eb8ry62k7mq9cn3qaj1h4krvtt27eqp9rsw1pz4e3s0u060mef1pblkbg7e2cmbh3x8f7y494rk5nadpctogk0mbqw5u4xyw387tpnzukc6147p1a6vxzp69xode8iusskp6h3uw3xwdtiw67pyhvv5abj2ir5pye7y1xo2vprr8s16yri7lctrcxtdswyodrqg9dum6rlb60uh0iqk6zk769d8kt5hfwmrdsa2k9loajhepmhj02vv9kansqz6tvvwykgykwscejw10yppw7l5r46nfrhplpvr5ukw7rx7qdee26o9p2n0zxf7bxhssz0740zrfc98hwg0q8fivrvzwgdboma2qdh45bytds9driqlwoj5ldm65t7r896b4o0m8h46hqzwtk11m3nrwihm7o2mt6z00r5y7y193d1cjuozruazq1dn1ct2whlnqjv02d5aarcb6kzzwv4jxfi531781c9w567yvia7rt6ytph82wnlmxiafdkqo6ifgna8coc2rf0ucdo3szltyzmrfhwxvrd0ce9aj2bmdx7qq08ywmi3ls6jjmx2yooh995oer881yjdks1lzwsy3h0co0dv7seggyfq15y0og6sq32m2493lez7p21j2jn1y88ein53g5mybwsna1kiupjdr7nl2r625lklxifkok6m6wyl4jvhuojltue7cb3zex32rbghocyu9wh2isycdqm80sg9o4a178nzoyhc99ia8xpcokd0tb6ee4bvdguorl4506sx8ev0zs7 == \g\y\2\g\4\j\l\v\b\u\a\s\g\t\0\5\z\d\f\o\j\q\y\f\p\u\h\n\6\1\8\c\m\l\9\g\y\t\k\f\f\7\q\r\s\w\j\0\7\u\f\j\r\7\b\8\d\s\7\i\y\i\w\n\q\u\o\w\8\3\c\6\c\h\o\l\5\g\y\5\9\6\a\t\2\6\c\j\t\r\w\8\o\s\g\b\k\v\u\1\w\o\1\l\p\0\o\j\1\1\2\l\n\z\q\a\n\v\b\c\7\w\d\x\o\z\4\8\w\4\3\s\v\u\1\e\f\i\t\x\7\3\0\c\b\k\s\9\1\u\k\h\s\q\m\6\r\x\i\9\z\d\8\n\x\1\v\q\2\o\z\9\p\3\p\9\2\e\5\s\r\7\g\t\n\a\q\n\9\b\s\a\n\x\n\u\8\q\7\w\a\y\t\3\b\t\p\u\1\8\x\w\z\z\1\5\7\1\2\z\j\q\m\g\t\5\v\g\7\i\o\f\k\i\z\1\z\2\9\8\3\5\e\b\8\r\y\6\2\k\7\m\q\9\c\n\3\q\a\j\1\h\4\k\r\v\t\t\2\7\e\q\p\9\r\s\w\1\p\z\4\e\3\s\0\u\0\6\0\m\e\f\1\p\b\l\k\b\g\7\e\2\c\m\b\h\3\x\8\f\7\y\4\9\4\r\k\5\n\a\d\p\c\t\o\g\k\0\m\b\q\w\5\u\4\x\y\w\3\8\7\t\p\n\z\u\k\c\6\1\4\7\p\1\a\6\v\x\z\p\6\9\x\o\d\e\8\i\u\s\s\k\p\6\h\3\u\w\3\x\w\d\t\i\w\6\7\p\y\h\v\v\5\a\b\j\2\i\r\5\p\y\e\7\y\1\x\o\2\v\p\r\r\8\s\1\6\y\r\i\7\l\c\t\r\c\x\t\d\s\w\y\o\d\r\q\g\9\d\u\m\6\r\l\b\6\0\u\h\0\i\q\k\6\z\k\7\6\9\d\8\k\t\5\h\f\w\m\r\d\s\a\2\k\9\l\o\a\j\h\e\p\m\h\j\0\2\v\v\9\k\a\n\s\q\z\6\t\v\v\w\y\k\g\y\k\w\s\c\e\j\w\1\0\y\p\p\w\7\l\5\r\4\6\n\f\r\h\p\l\p\v\r\5\u\k\w\7\r\x\7\q\d\e\e\2\6\o\9\p\2\n\0\z\x\f\7\b\x\h\s\s\z\0\7\4\0\z\r\f\c\9\8\h\w\g\0\q\8\f\i\v\r\v\z\w\g\d\b\o\m\a\2\q\d\h\4\5\b\y\t\d\s\9\d\r\i\q\l\w\o\j\5\l\d\m\6\5\t\7\r\8\9\6\b\4\o\0\m\8\h\4\6\h\q\z\w\t\k\1\1\m\3\n\r\w\i\h\m\7\o\2\m\t\6\z\0\0\r\5\y\7\y\1\9\3\d\1\c\j\u\o\z\r\u\a\z\q\1\d\n\1\c\t\2\w\h\l\n\q\j\v\0\2\d\5\a\a\r\c\b\6\k\z\z\w\v\4\j\x\f\i\5\3\1\7\8\1\c\9\w\5\6\7\y\v\i\a\7\r\t\6\y\t\p\h\8\2\w\n\l\m\x\i\a\f\d\k\q\o\6\i\f\g\n\a\8\c\o\c\2\r\f\0\u\c\d\o\3\s\z\l\t\y\z\m\r\f\h\w\x\v\r\d\0\c\e\9\a\j\2\b\m\d\x\7\q\q\0\8\y\w\m\i\3\l\s\6\j\j\m\x\2\y\o\o\h\9\9\5\o\e\r\8\8\1\y\j\d\k\s\1\l\z\w\s\y\3\h\0\c\o\0\d\v\7\s\e\g\g\y\f\q\1\5\y\0\o\g\6\s\q\3\2\m\2\4\9\3\l\e\z\7\p\2\1\j\2\j\n\1\y\8\8\e\i\n\5\3\g\5\m\y\b\w\s\n\a\1\k\i\u\p\j\d\r\7\n\l\2\r\6\2\5\l\k\l\x\i\f\k\o\k\6\m\6\w\y\l\4\j\v\h\u\o\j\l\t\u\e\7\c\b\3\z\e\x\3\2\r\b\g\h\o\c\y\u\9\w\h\2\i\s\y\c\d\q\m\8\0\s\g\9\o\4\a\1\7\8\n\z\o\y\h\c\9\9\i\a\8\x\p\c\o\k\d
\0\t\b\6\e\e\4\b\v\d\g\u\o\r\l\4\5\0\6\s\x\8\e\v\0\z\s\7 ]] 00:14:34.947 13:32:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:14:34.948 13:32:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ gy2g4jlvbuasgt05zdfojqyfpuhn618cml9gytkff7qrswj07ufjr7b8ds7iyiwnquow83c6chol5gy596at26cjtrw8osgbkvu1wo1lp0oj112lnzqanvbc7wdxoz48w43svu1efitx730cbks91ukhsqm6rxi9zd8nx1vq2oz9p3p92e5sr7gtnaqn9bsanxnu8q7wayt3btpu18xwzz15712zjqmgt5vg7iofkiz1z29835eb8ry62k7mq9cn3qaj1h4krvtt27eqp9rsw1pz4e3s0u060mef1pblkbg7e2cmbh3x8f7y494rk5nadpctogk0mbqw5u4xyw387tpnzukc6147p1a6vxzp69xode8iusskp6h3uw3xwdtiw67pyhvv5abj2ir5pye7y1xo2vprr8s16yri7lctrcxtdswyodrqg9dum6rlb60uh0iqk6zk769d8kt5hfwmrdsa2k9loajhepmhj02vv9kansqz6tvvwykgykwscejw10yppw7l5r46nfrhplpvr5ukw7rx7qdee26o9p2n0zxf7bxhssz0740zrfc98hwg0q8fivrvzwgdboma2qdh45bytds9driqlwoj5ldm65t7r896b4o0m8h46hqzwtk11m3nrwihm7o2mt6z00r5y7y193d1cjuozruazq1dn1ct2whlnqjv02d5aarcb6kzzwv4jxfi531781c9w567yvia7rt6ytph82wnlmxiafdkqo6ifgna8coc2rf0ucdo3szltyzmrfhwxvrd0ce9aj2bmdx7qq08ywmi3ls6jjmx2yooh995oer881yjdks1lzwsy3h0co0dv7seggyfq15y0og6sq32m2493lez7p21j2jn1y88ein53g5mybwsna1kiupjdr7nl2r625lklxifkok6m6wyl4jvhuojltue7cb3zex32rbghocyu9wh2isycdqm80sg9o4a178nzoyhc99ia8xpcokd0tb6ee4bvdguorl4506sx8ev0zs7 == \g\y\2\g\4\j\l\v\b\u\a\s\g\t\0\5\z\d\f\o\j\q\y\f\p\u\h\n\6\1\8\c\m\l\9\g\y\t\k\f\f\7\q\r\s\w\j\0\7\u\f\j\r\7\b\8\d\s\7\i\y\i\w\n\q\u\o\w\8\3\c\6\c\h\o\l\5\g\y\5\9\6\a\t\2\6\c\j\t\r\w\8\o\s\g\b\k\v\u\1\w\o\1\l\p\0\o\j\1\1\2\l\n\z\q\a\n\v\b\c\7\w\d\x\o\z\4\8\w\4\3\s\v\u\1\e\f\i\t\x\7\3\0\c\b\k\s\9\1\u\k\h\s\q\m\6\r\x\i\9\z\d\8\n\x\1\v\q\2\o\z\9\p\3\p\9\2\e\5\s\r\7\g\t\n\a\q\n\9\b\s\a\n\x\n\u\8\q\7\w\a\y\t\3\b\t\p\u\1\8\x\w\z\z\1\5\7\1\2\z\j\q\m\g\t\5\v\g\7\i\o\f\k\i\z\1\z\2\9\8\3\5\e\b\8\r\y\6\2\k\7\m\q\9\c\n\3\q\a\j\1\h\4\k\r\v\t\t\2\7\e\q\p\9\r\s\w\1\p\z\4\e\3\s\0\u\0\6\0\m\e\f\1\p\b\l\k\b\g\7\e\2\c\m\b\h\3\x\8\f\7\y\4\9\4\r\k\5\n\a\d\p\c\t\o\g\k\0\m\b\q\w\5\u\4\x\y\w\3\8\7\t\p\n\z\u\k\c\6\1\4\7\p\1\a\6\v\x\z\p\6\9\x\o\d\e\8\i\u\s\s\k\p\6\h\3\u\w\3\x\w\d\t\i\w\6\7\p\y\h\v\v\5\a\b\j\2\i\r\5\p\y\e\7\y\1\x\o\2\v\p\r\r\8\s\1\6\y\r\i\7\l\c\t\r\c\x\t\d\s\w\y\o\d\r\q\g\9\d\u\m\6\r\l\b\6\0\u\h\0\i\q\k\6\z\k\7\6\9\d\8\k\t\5\h\f\w\m\r\d\s\a\2\k\9\l\o\a\j\h\e\p\m\h\j\0\2\v\v\9\k\a\n\s\q\z\6\t\v\v\w\y\k\g\y\k\w\s\c\e\j\w\1\0\y\p\p\w\7\l\5\r\4\6\n\f\r\h\p\l\p\v\r\5\u\k\w\7\r\x\7\q\d\e\e\2\6\o\9\p\2\n\0\z\x\f\7\b\x\h\s\s\z\0\7\4\0\z\r\f\c\9\8\h\w\g\0\q\8\f\i\v\r\v\z\w\g\d\b\o\m\a\2\q\d\h\4\5\b\y\t\d\s\9\d\r\i\q\l\w\o\j\5\l\d\m\6\5\t\7\r\8\9\6\b\4\o\0\m\8\h\4\6\h\q\z\w\t\k\1\1\m\3\n\r\w\i\h\m\7\o\2\m\t\6\z\0\0\r\5\y\7\y\1\9\3\d\1\c\j\u\o\z\r\u\a\z\q\1\d\n\1\c\t\2\w\h\l\n\q\j\v\0\2\d\5\a\a\r\c\b\6\k\z\z\w\v\4\j\x\f\i\5\3\1\7\8\1\c\9\w\5\6\7\y\v\i\a\7\r\t\6\y\t\p\h\8\2\w\n\l\m\x\i\a\f\d\k\q\o\6\i\f\g\n\a\8\c\o\c\2\r\f\0\u\c\d\o\3\s\z\l\t\y\z\m\r\f\h\w\x\v\r\d\0\c\e\9\a\j\2\b\m\d\x\7\q\q\0\8\y\w\m\i\3\l\s\6\j\j\m\x\2\y\o\o\h\9\9\5\o\e\r\8\8\1\y\j\d\k\s\1\l\z\w\s\y\3\h\0\c\o\0\d\v\7\s\e\g\g\y\f\q\1\5\y\0\o\g\6\s\q\3\2\m\2\4\9\3\l\e\z\7\p\2\1\j\2\j\n\1\y\8\8\e\i\n\5\3\g\5\m\y\b\w\s\n\a\1\k\i\u\p\j\d\r\7\n\l\2\r\6\2\5\l\k\l\x\i\f\k\o\k\6\m\6\w\y\l\4\j\v\h\u\o\j\l\t\u\e\7\c\b\3\z\e\x\3\2\r\b\g\h\o\c\y\u\9\w\h\2\i\s\y\c\d\q\m\8\0\s\g\9\o\4\a\1\7\8\n\z\o\y\h\c\9\9\i\a\8\x\p\c\o\k\d\0\t\b\6\e\e\4\b\v\d\g\u\o\r\l\4\5\0\6\s\x\8\e\v\0\z\s\7 ]] 00:14:34.948 13:32:47 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:35.527 13:32:48 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:14:35.527 13:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:14:35.527 13:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:35.527 13:32:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:35.527 [2024-05-15 13:32:48.370715] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:35.527 [2024-05-15 13:32:48.371768] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77126 ] 00:14:35.527 { 00:14:35.527 "subsystems": [ 00:14:35.527 { 00:14:35.527 "subsystem": "bdev", 00:14:35.527 "config": [ 00:14:35.527 { 00:14:35.527 "params": { 00:14:35.527 "block_size": 512, 00:14:35.527 "num_blocks": 1048576, 00:14:35.527 "name": "malloc0" 00:14:35.527 }, 00:14:35.527 "method": "bdev_malloc_create" 00:14:35.527 }, 00:14:35.527 { 00:14:35.527 "params": { 00:14:35.527 "filename": "/dev/zram1", 00:14:35.527 "name": "uring0" 00:14:35.527 }, 00:14:35.527 "method": "bdev_uring_create" 00:14:35.527 }, 00:14:35.527 { 00:14:35.527 "method": "bdev_wait_for_examine" 00:14:35.527 } 00:14:35.527 ] 00:14:35.527 } 00:14:35.527 ] 00:14:35.527 } 00:14:35.527 [2024-05-15 13:32:48.499751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:35.527 [2024-05-15 13:32:48.513352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.527 [2024-05-15 13:32:48.585382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.048  Copying: 167/512 [MB] (167 MBps) Copying: 329/512 [MB] (161 MBps) Copying: 512/512 [MB] (average 171 MBps) 00:14:39.048 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:39.048 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:39.305 [2024-05-15 13:32:52.178264] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:14:39.305 { 00:14:39.305 "subsystems": [ 00:14:39.305 { 00:14:39.305 "subsystem": "bdev", 00:14:39.305 "config": [ 00:14:39.305 { 00:14:39.305 "params": { 00:14:39.305 "block_size": 512, 00:14:39.305 "num_blocks": 1048576, 00:14:39.305 "name": "malloc0" 00:14:39.305 }, 00:14:39.305 "method": "bdev_malloc_create" 00:14:39.305 }, 00:14:39.305 { 00:14:39.305 "params": { 00:14:39.305 "filename": "/dev/zram1", 00:14:39.305 "name": "uring0" 00:14:39.305 }, 00:14:39.305 "method": "bdev_uring_create" 00:14:39.305 }, 00:14:39.305 { 00:14:39.305 "params": { 00:14:39.305 "name": "uring0" 00:14:39.305 }, 00:14:39.305 "method": "bdev_uring_delete" 00:14:39.305 }, 00:14:39.305 { 00:14:39.305 "method": "bdev_wait_for_examine" 00:14:39.305 } 00:14:39.305 ] 00:14:39.305 } 00:14:39.305 ] 00:14:39.305 } 00:14:39.305 [2024-05-15 13:32:52.179386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77183 ] 00:14:39.305 [2024-05-15 13:32:52.309226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:39.305 [2024-05-15 13:32:52.328828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.305 [2024-05-15 13:32:52.384227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.562 [2024-05-15 13:32:52.601183] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:14:40.127  Copying: 0/0 [B] (average 0 Bps) 00:14:40.127 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:40.127 13:32:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:14:40.127 { 00:14:40.127 "subsystems": [ 00:14:40.127 { 00:14:40.127 "subsystem": "bdev", 00:14:40.127 "config": [ 00:14:40.127 { 00:14:40.127 "params": { 00:14:40.127 "block_size": 512, 00:14:40.127 "num_blocks": 1048576, 00:14:40.127 "name": "malloc0" 00:14:40.127 }, 00:14:40.127 "method": "bdev_malloc_create" 00:14:40.127 }, 00:14:40.127 { 00:14:40.127 "params": { 00:14:40.127 "filename": "/dev/zram1", 00:14:40.127 "name": "uring0" 00:14:40.127 }, 00:14:40.127 "method": "bdev_uring_create" 00:14:40.127 }, 00:14:40.127 { 00:14:40.127 "params": { 00:14:40.127 "name": "uring0" 00:14:40.127 }, 00:14:40.127 "method": "bdev_uring_delete" 00:14:40.127 }, 00:14:40.127 { 00:14:40.127 "method": "bdev_wait_for_examine" 00:14:40.127 } 00:14:40.127 ] 00:14:40.127 } 00:14:40.127 ] 00:14:40.127 } 00:14:40.127 [2024-05-15 13:32:53.018312] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:40.127 [2024-05-15 13:32:53.018399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77206 ] 00:14:40.127 [2024-05-15 13:32:53.139280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:40.127 [2024-05-15 13:32:53.154468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.384 [2024-05-15 13:32:53.228727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.384 [2024-05-15 13:32:53.441694] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:14:40.384 [2024-05-15 13:32:53.457847] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:14:40.384 [2024-05-15 13:32:53.457902] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:14:40.384 [2024-05-15 13:32:53.457913] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:14:40.384 [2024-05-15 13:32:53.457925] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:40.641 [2024-05-15 13:32:53.715767] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:14:40.898 13:32:53 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:14:40.898 13:32:53 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:14:41.155 00:14:41.155 real 0m13.818s 00:14:41.155 user 0m8.122s 00:14:41.155 sys 0m12.618s 00:14:41.155 13:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.155 ************************************ 00:14:41.155 END TEST dd_uring_copy 00:14:41.155 ************************************ 00:14:41.155 13:32:54 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:14:41.155 00:14:41.155 real 0m13.951s 00:14:41.155 user 0m8.176s 00:14:41.155 sys 0m12.699s 00:14:41.155 13:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.155 13:32:54 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:14:41.155 ************************************ 00:14:41.155 END TEST spdk_dd_uring 00:14:41.155 ************************************ 00:14:41.155 13:32:54 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:41.155 13:32:54 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:41.155 13:32:54 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.155 13:32:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:41.155 ************************************ 00:14:41.155 START TEST spdk_dd_sparse 00:14:41.155 ************************************ 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:14:41.156 * Looking for test storage... 
00:14:41.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 
00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:14:41.156 1+0 records in 00:14:41.156 1+0 records out 00:14:41.156 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00578849 s, 725 MB/s 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:14:41.156 1+0 records in 00:14:41.156 1+0 records out 00:14:41.156 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00663295 s, 632 MB/s 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:14:41.156 1+0 records in 00:14:41.156 1+0 records out 00:14:41.156 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00477438 s, 879 MB/s 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.156 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:14:41.414 ************************************ 00:14:41.414 START TEST dd_sparse_file_to_file 00:14:41.414 ************************************ 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # file_to_file 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:14:41.414 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:14:41.414 [2024-05-15 13:32:54.294053] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
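The prepare step just above lays out the sparse input these tests copy around: dd_sparse_aio_disk is truncated to 104857600 bytes (100 MiB) to back the AIO bdev, and file_zero1 gets three 4 MiB zero extents at seek offsets 0, 4 and 8 (counted in 4 MiB blocks, so at 0, 16 MiB and 32 MiB), leaving two 12 MiB holes. A stand-alone sketch of the same layout, using only commands and sizes that appear in the trace:

#!/usr/bin/env bash
# Rebuild the sparse layout from the dd/sparse.sh 'prepare' step in this log.
truncate dd_sparse_aio_disk --size 104857600        # 100 MiB file backing the dd_aio bdev

dd if=/dev/zero of=file_zero1 bs=4M count=1         # data at [0, 4) MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data at [16, 20) MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data at [32, 36) MiB

# Apparent size vs. allocated blocks is what the later stat --printf=%s and
# --printf=%b checks compare; on the filesystem in this log they report
# 37748736 bytes and 24576 blocks (12 MiB really allocated out of 36 MiB).
stat --printf='%s bytes, %b blocks\n' file_zero1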
00:14:41.414 [2024-05-15 13:32:54.294718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77298 ] 00:14:41.414 { 00:14:41.414 "subsystems": [ 00:14:41.414 { 00:14:41.414 "subsystem": "bdev", 00:14:41.414 "config": [ 00:14:41.414 { 00:14:41.414 "params": { 00:14:41.414 "block_size": 4096, 00:14:41.414 "filename": "dd_sparse_aio_disk", 00:14:41.414 "name": "dd_aio" 00:14:41.414 }, 00:14:41.414 "method": "bdev_aio_create" 00:14:41.414 }, 00:14:41.414 { 00:14:41.414 "params": { 00:14:41.414 "lvs_name": "dd_lvstore", 00:14:41.414 "bdev_name": "dd_aio" 00:14:41.414 }, 00:14:41.414 "method": "bdev_lvol_create_lvstore" 00:14:41.414 }, 00:14:41.414 { 00:14:41.414 "method": "bdev_wait_for_examine" 00:14:41.414 } 00:14:41.414 ] 00:14:41.414 } 00:14:41.414 ] 00:14:41.414 } 00:14:41.414 [2024-05-15 13:32:54.415413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:41.414 [2024-05-15 13:32:54.432703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.414 [2024-05-15 13:32:54.490181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.929  Copying: 12/36 [MB] (average 1200 MBps) 00:14:41.929 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:14:41.929 ************************************ 00:14:41.929 END TEST dd_sparse_file_to_file 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:41.929 00:14:41.929 real 0m0.615s 00:14:41.929 user 0m0.350s 00:14:41.929 sys 0m0.330s 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:14:41.929 ************************************ 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:14:41.929 ************************************ 00:14:41.929 START TEST dd_sparse_file_to_bdev 
00:14:41.929 ************************************ 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:14:41.929 13:32:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:41.929 { 00:14:41.929 "subsystems": [ 00:14:41.929 { 00:14:41.929 "subsystem": "bdev", 00:14:41.929 "config": [ 00:14:41.929 { 00:14:41.929 "params": { 00:14:41.929 "block_size": 4096, 00:14:41.929 "filename": "dd_sparse_aio_disk", 00:14:41.929 "name": "dd_aio" 00:14:41.929 }, 00:14:41.929 "method": "bdev_aio_create" 00:14:41.929 }, 00:14:41.929 { 00:14:41.929 "params": { 00:14:41.929 "lvs_name": "dd_lvstore", 00:14:41.929 "lvol_name": "dd_lvol", 00:14:41.929 "size_in_mib": 36, 00:14:41.929 "thin_provision": true 00:14:41.929 }, 00:14:41.929 "method": "bdev_lvol_create" 00:14:41.929 }, 00:14:41.929 { 00:14:41.929 "method": "bdev_wait_for_examine" 00:14:41.929 } 00:14:41.929 ] 00:14:41.930 } 00:14:41.930 ] 00:14:41.930 } 00:14:41.930 [2024-05-15 13:32:54.983011] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:41.930 [2024-05-15 13:32:54.983531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77335 ] 00:14:42.187 [2024-05-15 13:32:55.116437] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
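The sparse copies in this section each move only 12 of the 36 logical MiB, which is why the progress lines read "Copying: 12/36 [MB]" and why the stat --printf=%b checks expect 24576 while the %s checks expect 37748736. A quick arithmetic cross-check using only values from the log (the 36 MiB file/lvol size, the three 4 MiB extents from prepare, 512-byte stat blocks):

# Numbers below are taken from this log; runs in bash.
logical=$(( 36 * 1024 * 1024 ))      # 37748736 bytes: file_zero* size and the lvol size_in_mib
written=$(( 3 * 4 * 1024 * 1024 ))   # 12582912 bytes actually written during prepare
echo $(( written / 512 ))            # 24576, the %b block count checked for each file
echo $(( written / 1024 / 1024 ))/$(( logical / 1024 / 1024 ))   # 12/36, the MB progress shown

This is presumably also why the thin-provisioned dd_lvol stays mostly unallocated: spdk_dd runs with --sparse, so the holes in file_zero1 are skipped rather than written out as zeroes.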
00:14:42.187 [2024-05-15 13:32:55.128803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.187 [2024-05-15 13:32:55.186472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.446  Copying: 12/36 [MB] (average 631 MBps) 00:14:42.446 00:14:42.446 00:14:42.446 real 0m0.578s 00:14:42.446 user 0m0.338s 00:14:42.446 sys 0m0.308s 00:14:42.446 ************************************ 00:14:42.446 END TEST dd_sparse_file_to_bdev 00:14:42.446 ************************************ 00:14:42.446 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:42.446 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:14:42.705 ************************************ 00:14:42.705 START TEST dd_sparse_bdev_to_file 00:14:42.705 ************************************ 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:14:42.705 13:32:55 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:14:42.705 [2024-05-15 13:32:55.607815] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:42.705 [2024-05-15 13:32:55.607911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77373 ] 00:14:42.705 { 00:14:42.705 "subsystems": [ 00:14:42.705 { 00:14:42.705 "subsystem": "bdev", 00:14:42.705 "config": [ 00:14:42.705 { 00:14:42.705 "params": { 00:14:42.705 "block_size": 4096, 00:14:42.705 "filename": "dd_sparse_aio_disk", 00:14:42.705 "name": "dd_aio" 00:14:42.705 }, 00:14:42.705 "method": "bdev_aio_create" 00:14:42.705 }, 00:14:42.705 { 00:14:42.705 "method": "bdev_wait_for_examine" 00:14:42.705 } 00:14:42.705 ] 00:14:42.706 } 00:14:42.706 ] 00:14:42.706 } 00:14:42.706 [2024-05-15 13:32:55.731980] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:14:42.706 [2024-05-15 13:32:55.751026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.706 [2024-05-15 13:32:55.801139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.238  Copying: 12/36 [MB] (average 1000 MBps) 00:14:43.238 00:14:43.238 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:14:43.238 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:14:43.238 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:14:43.238 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:14:43.239 00:14:43.239 real 0m0.569s 00:14:43.239 user 0m0.336s 00:14:43.239 sys 0m0.301s 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.239 ************************************ 00:14:43.239 END TEST dd_sparse_bdev_to_file 00:14:43.239 ************************************ 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:14:43.239 00:14:43.239 real 0m2.060s 00:14:43.239 user 0m1.123s 00:14:43.239 sys 0m1.137s 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.239 ************************************ 00:14:43.239 END TEST spdk_dd_sparse 00:14:43.239 ************************************ 00:14:43.239 13:32:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:14:43.239 13:32:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:14:43.239 13:32:56 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.239 13:32:56 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.239 13:32:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:43.239 ************************************ 00:14:43.239 START TEST spdk_dd_negative 00:14:43.239 ************************************ 00:14:43.239 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:14:43.239 * Looking for test storage... 
00:14:43.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:14:43.239 13:32:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.239 13:32:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.497 13:32:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.497 13:32:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.497 13:32:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.497 13:32:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:43.498 ************************************ 00:14:43.498 START TEST dd_invalid_arguments 00:14:43.498 ************************************ 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:14:43.498 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:14:43.498 00:14:43.498 CPU options: 00:14:43.498 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:14:43.498 (like [0,1,10]) 00:14:43.498 --lcores lcore to CPU mapping list. The list is in the format: 00:14:43.498 [<,lcores[@CPUs]>...] 00:14:43.498 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:14:43.498 Within the group, '-' is used for range separator, 00:14:43.498 ',' is used for single number separator. 00:14:43.498 '( )' can be omitted for single element group, 00:14:43.498 '@' can be omitted if cpus and lcores have the same value 00:14:43.498 --disable-cpumask-locks Disable CPU core lock files. 
00:14:43.498 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:14:43.498 pollers in the app support interrupt mode) 00:14:43.498 -p, --main-core main (primary) core for DPDK 00:14:43.498 00:14:43.498 Configuration options: 00:14:43.498 -c, --config, --json JSON config file 00:14:43.498 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:14:43.498 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:14:43.498 --wait-for-rpc wait for RPCs to initialize subsystems 00:14:43.498 --rpcs-allowed comma-separated list of permitted RPCS 00:14:43.498 --json-ignore-init-errors don't exit on invalid config entry 00:14:43.498 00:14:43.498 Memory options: 00:14:43.498 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:14:43.498 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:14:43.498 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:14:43.498 -R, --huge-unlink unlink huge files after initialization 00:14:43.498 -n, --mem-channels number of memory channels used for DPDK 00:14:43.498 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:14:43.498 --msg-mempool-size global message memory pool size in count (default: 262143) 00:14:43.498 --no-huge run without using hugepages 00:14:43.498 -i, --shm-id shared memory ID (optional) 00:14:43.498 -g, --single-file-segments force creating just one hugetlbfs file 00:14:43.498 00:14:43.498 PCI options: 00:14:43.498 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:14:43.498 -B, --pci-blocked pci addr to block (can be used more than once) 00:14:43.498 -u, --no-pci disable PCI access 00:14:43.498 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:14:43.498 00:14:43.498 Log options: 00:14:43.498 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:14:43.498 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:14:43.498 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:14:43.498 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:14:43.498 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:14:43.498 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:14:43.498 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:14:43.498 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:14:43.498 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:14:43.498 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:14:43.498 virtio_vfio_user, vmd) 00:14:43.498 --silence-noticelog disable notice level logging to stderr 00:14:43.498 00:14:43.498 Trace options: 00:14:43.498 --num-trace-entries number of trace entries for each core, must be power of 2, 00:14:43.498 setting 0 to disable trace (default 32768) 00:14:43.498 Tracepoints vary in size and can use more than one trace entry. 00:14:43.498 -e, --tpoint-group [:] 00:14:43.498 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:14:43.498 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:14:43.498 [2024-05-15 13:32:56.408514] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:14:43.498 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:14:43.498 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:14:43.498 a tracepoint group. 
First tpoint inside a group can be enabled by 00:14:43.498 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:14:43.498 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:14:43.498 in /include/spdk_internal/trace_defs.h 00:14:43.498 00:14:43.498 Other options: 00:14:43.498 -h, --help show this usage 00:14:43.498 -v, --version print SPDK version 00:14:43.498 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:14:43.498 --env-context Opaque context for use of the env implementation 00:14:43.498 00:14:43.498 Application specific: 00:14:43.498 [--------- DD Options ---------] 00:14:43.498 --if Input file. Must specify either --if or --ib. 00:14:43.498 --ib Input bdev. Must specifier either --if or --ib 00:14:43.498 --of Output file. Must specify either --of or --ob. 00:14:43.498 --ob Output bdev. Must specify either --of or --ob. 00:14:43.498 --iflag Input file flags. 00:14:43.498 --oflag Output file flags. 00:14:43.498 --bs I/O unit size (default: 4096) 00:14:43.498 --qd Queue depth (default: 2) 00:14:43.498 --count I/O unit count. The number of I/O units to copy. (default: all) 00:14:43.498 --skip Skip this many I/O units at start of input. (default: 0) 00:14:43.498 --seek Skip this many I/O units at start of output. (default: 0) 00:14:43.498 --aio Force usage of AIO. (by default io_uring is used if available) 00:14:43.498 --sparse Enable hole skipping in input target 00:14:43.498 Available iflag and oflag values: 00:14:43.498 append - append mode 00:14:43.498 direct - use direct I/O for data 00:14:43.498 directory - fail unless a directory 00:14:43.498 dsync - use synchronized I/O for data 00:14:43.498 noatime - do not update access time 00:14:43.498 noctty - do not assign controlling terminal from file 00:14:43.498 nofollow - do not follow symlinks 00:14:43.498 nonblock - use non-blocking I/O 00:14:43.498 sync - use synchronized I/O for data and metadata 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.498 00:14:43.498 real 0m0.073s 00:14:43.498 user 0m0.041s 00:14:43.498 sys 0m0.031s 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.498 ************************************ 00:14:43.498 END TEST dd_invalid_arguments 00:14:43.498 ************************************ 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.498 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:43.499 ************************************ 00:14:43.499 START TEST dd_double_input 00:14:43.499 ************************************ 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 
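dd_invalid_arguments above, and the remaining dd_* negative cases that follow, all rely on the same wrapper: run spdk_dd with a deliberately bad option set and require it to fail. The sketch below mirrors only the behaviour visible in the trace; the real NOT helper lives in autotest_common.sh and additionally vets the executable through valid_exec_arg before running it.

# Minimal stand-in for the negative-test wrapper used throughout this section.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))    # NOT succeeds only when the wrapped command failed
}

# Mirrors the dd_invalid_arguments case above (path copied from the log):
# spdk_dd rejects the unknown --ii= option, prints its usage text and exits
# nonzero, which the harness then normalizes (es=2 in this run).
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=

The conflicting-option cases that follow (--if with --ib, --of with --ob, a missing input or output, --bs=0, --count=-9) are driven through the same wrapper and each exits with status 22 in this run.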
00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:14:43.499 [2024-05-15 13:32:56.526637] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.499 00:14:43.499 real 0m0.060s 00:14:43.499 user 0m0.028s 00:14:43.499 sys 0m0.031s 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.499 ************************************ 00:14:43.499 END TEST dd_double_input 00:14:43.499 ************************************ 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.499 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:43.756 ************************************ 00:14:43.756 START TEST dd_double_output 00:14:43.756 ************************************ 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:14:43.756 [2024-05-15 13:32:56.642941] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.756 00:14:43.756 real 0m0.057s 00:14:43.756 user 0m0.036s 00:14:43.756 sys 0m0.020s 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.756 13:32:56 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:14:43.757 ************************************ 00:14:43.757 END TEST dd_double_output 00:14:43.757 ************************************ 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:43.757 ************************************ 00:14:43.757 START TEST dd_no_input 00:14:43.757 ************************************ 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:14:43.757 [2024-05-15 13:32:56.779505] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.757 ************************************ 00:14:43.757 END TEST dd_no_input 00:14:43.757 ************************************ 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.757 00:14:43.757 real 0m0.085s 00:14:43.757 user 0m0.051s 00:14:43.757 sys 0m0.033s 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.757 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:44.014 ************************************ 00:14:44.014 START TEST dd_no_output 00:14:44.014 ************************************ 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:14:44.014 [2024-05-15 13:32:56.912648] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:14:44.014 13:32:56 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.014 00:14:44.014 real 0m0.080s 00:14:44.014 user 0m0.040s 00:14:44.014 sys 0m0.038s 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:14:44.014 ************************************ 00:14:44.014 END TEST dd_no_output 00:14:44.014 ************************************ 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:44.014 ************************************ 00:14:44.014 START TEST dd_wrong_blocksize 00:14:44.014 ************************************ 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:44.014 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.015 13:32:56 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:14:44.015 [2024-05-15 13:32:57.040548] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.015 00:14:44.015 real 0m0.058s 00:14:44.015 user 0m0.036s 00:14:44.015 sys 0m0.021s 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.015 ************************************ 00:14:44.015 END TEST dd_wrong_blocksize 00:14:44.015 ************************************ 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.015 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:44.272 ************************************ 00:14:44.272 START TEST dd_smaller_blocksize 00:14:44.272 ************************************ 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.272 
13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:44.272 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:14:44.272 [2024-05-15 13:32:57.171439] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:44.272 [2024-05-15 13:32:57.171549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77586 ] 00:14:44.272 [2024-05-15 13:32:57.298453] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:44.272 [2024-05-15 13:32:57.316229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.529 [2024-05-15 13:32:57.371116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.529 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:14:44.529 [2024-05-15 13:32:57.438466] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:14:44.529 [2024-05-15 13:32:57.438496] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:44.529 [2024-05-15 13:32:57.532066] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:44.529 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:14:44.529 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.529 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:14:44.529 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:14:44.529 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:14:44.530 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.530 ************************************ 00:14:44.530 END TEST dd_smaller_blocksize 00:14:44.530 ************************************ 00:14:44.530 00:14:44.530 real 0m0.503s 00:14:44.530 user 0m0.250s 00:14:44.530 sys 0m0.147s 00:14:44.530 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.530 13:32:57 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:44.788 ************************************ 00:14:44.788 START TEST dd_invalid_count 00:14:44.788 ************************************ 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:14:44.788 13:32:57 
spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:14:44.788 [2024-05-15 13:32:57.726483] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:44.788 00:14:44.788 real 0m0.081s 00:14:44.788 user 0m0.047s 00:14:44.788 sys 0m0.032s 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:14:44.788 ************************************ 00:14:44.788 END TEST dd_invalid_count 00:14:44.788 ************************************ 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:44.788 ************************************ 00:14:44.788 START TEST 
dd_invalid_oflag 00:14:44.788 ************************************ 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:44.788 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:14:44.788 [2024-05-15 13:32:57.868626] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.046 ************************************ 00:14:45.046 END TEST dd_invalid_oflag 00:14:45.046 ************************************ 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.046 00:14:45.046 real 0m0.074s 00:14:45.046 user 0m0.039s 00:14:45.046 sys 0m0.033s 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 ************************************ 00:14:45.046 START TEST dd_invalid_iflag 00:14:45.046 
************************************ 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:45.046 13:32:57 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:14:45.046 [2024-05-15 13:32:57.998080] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.046 00:14:45.046 real 0m0.064s 00:14:45.046 user 0m0.034s 00:14:45.046 sys 0m0.029s 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 ************************************ 00:14:45.046 END TEST dd_invalid_iflag 00:14:45.046 ************************************ 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:45.046 ************************************ 00:14:45.046 START TEST dd_unknown_flag 00:14:45.046 ************************************ 00:14:45.046 
13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:45.046 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:14:45.046 [2024-05-15 13:32:58.126813] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:45.046 [2024-05-15 13:32:58.126922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77678 ] 00:14:45.304 [2024-05-15 13:32:58.255229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:45.304 [2024-05-15 13:32:58.277536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.304 [2024-05-15 13:32:58.338958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.561 [2024-05-15 13:32:58.412812] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:45.561 [2024-05-15 13:32:58.413155] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:45.561 [2024-05-15 13:32:58.413333] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:14:45.561 [2024-05-15 13:32:58.413447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:45.561 [2024-05-15 13:32:58.413849] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:14:45.561 [2024-05-15 13:32:58.414005] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:45.561 [2024-05-15 13:32:58.414121] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:45.561 [2024-05-15 13:32:58.414256] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:14:45.561 [2024-05-15 13:32:58.515196] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:45.561 ************************************ 00:14:45.561 END TEST dd_unknown_flag 00:14:45.561 ************************************ 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:14:45.561 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:45.561 00:14:45.561 real 0m0.531s 00:14:45.562 user 0m0.279s 00:14:45.562 sys 0m0.150s 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:45.562 ************************************ 00:14:45.562 START TEST dd_invalid_json 00:14:45.562 ************************************ 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:14:45.562 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:14:45.819 13:32:58 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:14:45.819 [2024-05-15 13:32:58.717372] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:45.819 [2024-05-15 13:32:58.717742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77712 ] 00:14:45.819 [2024-05-15 13:32:58.844832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:45.819 [2024-05-15 13:32:58.861714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.077 [2024-05-15 13:32:58.921322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.077 [2024-05-15 13:32:58.921625] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:14:46.077 [2024-05-15 13:32:58.921735] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:46.077 [2024-05-15 13:32:58.921849] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:46.077 [2024-05-15 13:32:58.921927] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.077 00:14:46.077 real 0m0.357s 00:14:46.077 user 0m0.166s 00:14:46.077 sys 0m0.085s 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.077 13:32:59 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:14:46.077 ************************************ 00:14:46.077 END TEST dd_invalid_json 00:14:46.077 ************************************ 00:14:46.077 ************************************ 00:14:46.077 END TEST spdk_dd_negative 00:14:46.077 ************************************ 00:14:46.077 00:14:46.077 real 0m2.824s 00:14:46.077 user 0m1.322s 00:14:46.077 sys 0m1.169s 00:14:46.078 13:32:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.078 13:32:59 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:14:46.078 ************************************ 00:14:46.078 END TEST spdk_dd 00:14:46.078 ************************************ 00:14:46.078 00:14:46.078 real 1m10.421s 00:14:46.078 user 0m42.354s 00:14:46.078 sys 0m32.359s 00:14:46.078 13:32:59 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:46.078 13:32:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:14:46.078 13:32:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:14:46.078 13:32:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:46.078 13:32:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:46.078 13:32:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.078 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.334 13:32:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:46.334 13:32:59 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:14:46.334 13:32:59 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:14:46.334 13:32:59 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:14:46.334 13:32:59 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:14:46.334 13:32:59 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:14:46.335 13:32:59 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:46.335 13:32:59 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
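The spdk_dd_negative suite that finishes here follows one pattern throughout: run spdk_dd with a single bad argument and expect a specific validation error plus a non-zero exit status. Collected in one place for readability, and purely illustrative rather than copied from dd/negative_dd.sh (it assumes the NOT sketch given earlier is in scope, with paths shortened):

DD=./build/bin/spdk_dd
IF=test/dd/dd.dump0
OF=test/dd/dd.dump1

NOT "$DD" --if="$IF" --of="$OF" --bs=0               # "Invalid --bs value"
NOT "$DD" --if="$IF" --of="$OF" --bs=99999999999999  # "Cannot allocate memory - try smaller block size value"
NOT "$DD" --if="$IF" --of="$OF" --count=-9           # "Invalid --count value"
NOT "$DD" --ib= --ob= --oflag=0                      # "--oflags may be used only with --of"
NOT "$DD" --ib= --ob= --iflag=0                      # "--iflags may be used only with --if"
NOT "$DD" --if="$IF" --of="$OF" --oflag=-1           # "Unknown file flag: -1"
NOT "$DD" --if="$IF" --of="$OF" --json <(:)          # "JSON data cannot be empty" (the log feeds an empty stream on /dev/fd/62)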
00:14:46.335 13:32:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:46.335 13:32:59 -- common/autotest_common.sh@10 -- # set +x 00:14:46.335 ************************************ 00:14:46.335 START TEST nvmf_tcp 00:14:46.335 ************************************ 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:14:46.335 * Looking for test storage... 00:14:46.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.335 13:32:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.335 13:32:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.335 13:32:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.335 13:32:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.335 13:32:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:46.335 13:32:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.335 13:32:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:14:46.335 13:32:59 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:14:46.335 13:32:59 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:46.335 13:32:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.335 ************************************ 00:14:46.335 START TEST nvmf_host_management 00:14:46.335 ************************************ 00:14:46.335 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:46.592 * Looking for test storage... 
00:14:46.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:46.592 Cannot find device "nvmf_init_br" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:46.592 Cannot find device "nvmf_tgt_br" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.592 Cannot find device "nvmf_tgt_br2" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:46.592 Cannot find device "nvmf_init_br" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:46.592 Cannot find device "nvmf_tgt_br" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:14:46.592 13:32:59 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:46.592 Cannot find device "nvmf_tgt_br2" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:46.592 Cannot find device "nvmf_br" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:46.592 Cannot find device "nvmf_init_if" 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.592 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.593 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
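The nvmf_veth_init sequence running here gives the NVMe-oF target its own network namespace and wires it back to the host over veth pairs and a bridge; the "Cannot find device" lines above are just the idempotent teardown of links left over from a previous run. Condensed into one place, with names and addresses taken from the trace (a sketch of the topology, not a replacement for nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target    <-> bridge pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target port
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
ip link add nvmf_br type bridge
# The trace continues below: bring every link up, enslave the *_br ends to nvmf_br,
# allow TCP/4420 through iptables, and ping 10.0.0.2/.3/.1 to prove the wiring works.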
00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:46.849 00:14:46.849 --- 10.0.0.2 ping statistics --- 00:14:46.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.849 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:14:46.849 00:14:46.849 --- 10.0.0.3 ping statistics --- 00:14:46.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.849 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:46.849 00:14:46.849 --- 10.0.0.1 ping statistics --- 00:14:46.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.849 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:14:46.849 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=77963 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77963 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 77963 ']' 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.850 13:32:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:47.108 [2024-05-15 13:32:59.981630] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:47.108 [2024-05-15 13:32:59.982100] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.108 [2024-05-15 13:33:00.123541] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:47.108 [2024-05-15 13:33:00.141499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:47.108 [2024-05-15 13:33:00.198691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.108 [2024-05-15 13:33:00.198963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.108 [2024-05-15 13:33:00.199102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.108 [2024-05-15 13:33:00.199224] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.108 [2024-05-15 13:33:00.199395] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
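nvmfappstart launched the target above with -m 0x1E, and the reactor notices just below show what that mask selects: 0x1E is binary 11110, i.e. cores 1-4, leaving core 0 free for the bdevperf initiator started later with -c 0x1. A throwaway snippet for decoding such a mask (illustrative only, not part of the harness):

mask=0x1E
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# prints cores 1 2 3 4, matching the "Reactor started on core N" notices that follow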
00:14:47.108 [2024-05-15 13:33:00.199596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.108 [2024-05-15 13:33:00.199770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.108 [2024-05-15 13:33:00.199774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:47.108 [2024-05-15 13:33:00.199666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.042 [2024-05-15 13:33:00.934090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.042 13:33:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.042 Malloc0 00:14:48.042 [2024-05-15 13:33:01.009813] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:48.042 [2024-05-15 13:33:01.010624] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.042 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
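The batched rpc_cmd above, fed from the rpcs.txt the script just wrote, is what produced the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2:4420 reported in the notices. The batch itself is not echoed into this log, so the rpc.py sequence below is only a plausible reconstruction of its shape: the 64 MiB / 512-byte malloc geometry, the cnode0/host0 NQNs, the SPDKISFASTANDAWESOME serial and the listener address come from the surrounding output, while every other flag is an assumption.

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# nvmf_create_transport -t tcp -o -u 8192 was already issued separately a few lines up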
00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=78023 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 78023 /var/tmp/bdevperf.sock 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 78023 ']' 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:48.043 { 00:14:48.043 "params": { 00:14:48.043 "name": "Nvme$subsystem", 00:14:48.043 "trtype": "$TEST_TRANSPORT", 00:14:48.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:48.043 "adrfam": "ipv4", 00:14:48.043 "trsvcid": "$NVMF_PORT", 00:14:48.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:48.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:48.043 "hdgst": ${hdgst:-false}, 00:14:48.043 "ddgst": ${ddgst:-false} 00:14:48.043 }, 00:14:48.043 "method": "bdev_nvme_attach_controller" 00:14:48.043 } 00:14:48.043 EOF 00:14:48.043 )") 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:48.043 13:33:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:48.043 "params": { 00:14:48.043 "name": "Nvme0", 00:14:48.043 "trtype": "tcp", 00:14:48.043 "traddr": "10.0.0.2", 00:14:48.043 "adrfam": "ipv4", 00:14:48.043 "trsvcid": "4420", 00:14:48.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:48.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:48.043 "hdgst": false, 00:14:48.043 "ddgst": false 00:14:48.043 }, 00:14:48.043 "method": "bdev_nvme_attach_controller" 00:14:48.043 }' 00:14:48.043 [2024-05-15 13:33:01.113197] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
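The /dev/fd/63 handed to bdevperf above is a process substitution: gen_nvmf_target_json prints the bdev_nvme_attach_controller entry shown verbatim in the trace, and bdevperf reads it as its --json configuration, so the bdev it exercises (Nvme0n1 in the iostat polling below) is an NVMe-oF/TCP attachment to the subsystem set up earlier. A stand-alone equivalent, with the attach parameters copied from the printf above, paths shortened, and the outer "subsystems" wrapper assumed (only the inner object appears in this log):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json \
    -q 64 -o 65536 -w verify -t 10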
00:14:48.043 [2024-05-15 13:33:01.113617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78023 ] 00:14:48.302 [2024-05-15 13:33:01.248089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:48.302 [2024-05-15 13:33:01.268090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.302 [2024-05-15 13:33:01.326467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.560 Running I/O for 10 seconds... 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.127 [2024-05-15 13:33:02.178832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.179146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.179355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.179526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.179694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.179846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.180181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.180347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.180454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.180554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.180659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.180724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.180829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.180890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.127 [2024-05-15 13:33:02.181032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:49.127 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.127 [2024-05-15 13:33:02.181211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.127 [2024-05-15 13:33:02.181641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.127 [2024-05-15 13:33:02.181653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:49.128 [2024-05-15 13:33:02.181957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.181981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.181993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 
13:33:02.182204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182462] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.128 [2024-05-15 13:33:02.182578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.128 [2024-05-15 13:33:02.182590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa5dfe0 is same with the state(5) to be set 00:14:49.128 [2024-05-15 13:33:02.182667] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa5dfe0 was disconnected and freed. reset controller. 
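The abort burst above is the expected effect of the host-management step being exercised here: nvmf_subsystem_remove_host is issued while bdevperf still has 64 commands in flight, so the target tears down the queue pair (every outstanding READ/WRITE completes as ABORTED - SQ DELETION) and frees it, and the host is then re-admitted so the controller reset can reconnect. Reduced to bare RPC calls, the flow is roughly the following sketch (the test itself goes through its rpc_cmd wrapper rather than invoking rpc.py directly):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # in-flight I/O aborted, qpair disconnected and freed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # host re-admitted so the reset/reconnect succeeds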
00:14:49.128 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:49.128 [2024-05-15 13:33:02.183847] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:49.128 task offset: 16256 on job bdev=Nvme0n1 fails 00:14:49.128 00:14:49.128 Latency(us) 00:14:49.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.128 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:49.128 Job: Nvme0n1 ended in about 0.69 seconds with error 00:14:49.129 Verification LBA range: start 0x0 length 0x400 00:14:49.129 Nvme0n1 : 0.69 1582.00 98.88 93.06 0.00 37163.94 8301.23 41943.04 00:14:49.129 =================================================================================================================== 00:14:49.129 Total : 1582.00 98.88 93.06 0.00 37163.94 8301.23 41943.04 00:14:49.129 [2024-05-15 13:33:02.187028] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:49.129 [2024-05-15 13:33:02.187255] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c3b10 (9): Bad file descriptor 00:14:49.129 13:33:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.129 13:33:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:49.129 [2024-05-15 13:33:02.193326] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 78023 00:14:50.498 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78023) - No such process 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:50.498 { 00:14:50.498 "params": { 00:14:50.498 "name": "Nvme$subsystem", 00:14:50.498 "trtype": "$TEST_TRANSPORT", 00:14:50.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:50.498 "adrfam": "ipv4", 00:14:50.498 "trsvcid": "$NVMF_PORT", 00:14:50.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:50.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:50.498 "hdgst": ${hdgst:-false}, 00:14:50.498 "ddgst": ${ddgst:-false} 00:14:50.498 }, 00:14:50.498 "method": "bdev_nvme_attach_controller" 00:14:50.498 } 00:14:50.498 EOF 00:14:50.498 )") 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:50.498 13:33:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:50.498 "params": { 00:14:50.498 "name": "Nvme0", 00:14:50.498 "trtype": "tcp", 00:14:50.498 "traddr": "10.0.0.2", 00:14:50.498 "adrfam": "ipv4", 00:14:50.498 "trsvcid": "4420", 00:14:50.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:50.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:50.498 "hdgst": false, 00:14:50.498 "ddgst": false 00:14:50.498 }, 00:14:50.498 "method": "bdev_nvme_attach_controller" 00:14:50.498 }' 00:14:50.498 [2024-05-15 13:33:03.261630] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:50.498 [2024-05-15 13:33:03.262070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78061 ] 00:14:50.498 [2024-05-15 13:33:03.393687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:50.498 [2024-05-15 13:33:03.412968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.499 [2024-05-15 13:33:03.474104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.756 Running I/O for 1 seconds... 00:14:51.743 00:14:51.743 Latency(us) 00:14:51.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.743 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:51.743 Verification LBA range: start 0x0 length 0x400 00:14:51.743 Nvme0n1 : 1.01 1584.65 99.04 0.00 0.00 39524.19 5648.58 42692.02 00:14:51.743 =================================================================================================================== 00:14:51.743 Total : 1584.65 99.04 0.00 0.00 39524.19 5648.58 42692.02 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.999 rmmod nvme_tcp 00:14:51.999 rmmod nvme_fabrics 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:51.999 
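As a quick cross-check of the two bdevperf summaries above (not part of the captured output), the MiB/s column is simply IOPS multiplied by the 64 KiB I/O size:
  echo "scale=3; 1584.65 * 65536 / 1048576" | bc   # ~99.040, matching the 99.04 MiB/s of the 1 s verify run
  echo "scale=3; 1582.00 * 65536 / 1048576" | bc   # 98.875, i.e. the 98.88 MiB/s reported for the interrupted 10 s run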
13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77963 ']' 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 77963 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 77963 ']' 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 77963 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:51.999 13:33:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77963 00:14:51.999 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:51.999 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:51.999 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77963' killing process with pid 77963 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 77963 00:14:51.999 [2024-05-15 13:33:05.003128] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:51.999 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 77963 00:14:52.255 [2024-05-15 13:33:05.193192] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:52.255 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.255 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:52.256 ************************************ 00:14:52.256 END TEST nvmf_host_management 00:14:52.256 ************************************ 00:14:52.256 00:14:52.256 real 0m5.923s 00:14:52.256 user 0m22.346s 00:14:52.256 sys 0m1.754s 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.256 13:33:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:52.256 13:33:05 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:52.256 13:33:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:52.256 13:33:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.256 13:33:05
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.256 ************************************ 00:14:52.256 START TEST nvmf_lvol 00:14:52.256 ************************************ 00:14:52.256 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:52.513 * Looking for test storage... 00:14:52.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:52.513 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
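The sizes exported just above (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, LVOL_BDEV_INIT_SIZE=20, LVOL_BDEV_FINAL_SIZE=30) drive the lvol flow that plays out further down in this log: two 64 MiB, 512-byte-block malloc bdevs are striped into a raid0 bdev, an lvstore is created on top, and a 20 MiB lvol is carved out, later snapshotted, resized to 30 MiB and cloned. Condensed into the underlying rpc.py calls, a rough sketch of that sequence (UUIDs in the real run are generated on the fly) looks like:
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_malloc_create 64 512                              # Malloc0
  $rpc_py bdev_malloc_create 64 512                              # Malloc1
  $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs_uuid=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)         # prints the lvstore UUID
  $rpc_py bdev_lvol_create -u "$lvs_uuid" lvol 20                # 20 MiB lvol, resized and cloned later in the trace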
00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:52.514 Cannot find device "nvmf_tgt_br" 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.514 Cannot find device "nvmf_tgt_br2" 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:52.514 Cannot find device "nvmf_tgt_br" 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:52.514 Cannot find device "nvmf_tgt_br2" 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:52.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.514 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:52.514 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:52.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:52.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:14:52.773 00:14:52.773 --- 10.0.0.2 ping statistics --- 00:14:52.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.773 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:52.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:52.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:14:52.773 00:14:52.773 --- 10.0.0.3 ping statistics --- 00:14:52.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.773 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:52.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:52.773 00:14:52.773 --- 10.0.0.1 ping statistics --- 00:14:52.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.773 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=78274 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 78274 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 78274 ']' 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.773 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:52.774 13:33:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:52.774 [2024-05-15 13:33:05.858230] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
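For orientation, the veth/netns plumbing assembled above by nvmf_veth_init leaves the initiator side (10.0.0.1 on nvmf_init_if) in the root namespace and the target addresses (10.0.0.2 and 10.0.0.3) inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge with TCP port 4420 opened in iptables; the three pings above confirm reachability in both directions. A quick way to eyeball the result by hand (not captured in this log) is:
  ip -br addr show nvmf_init_if                     # expect 10.0.0.1/24 on the initiator veth
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show   # expect nvmf_tgt_if 10.0.0.2/24 and nvmf_tgt_if2 10.0.0.3/24
  ip -br link show master nvmf_br                   # the bridge ports: nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2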
00:14:52.774 [2024-05-15 13:33:05.858474] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.038 [2024-05-15 13:33:05.983460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:53.038 [2024-05-15 13:33:06.002217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:53.038 [2024-05-15 13:33:06.056976] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.038 [2024-05-15 13:33:06.057288] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.038 [2024-05-15 13:33:06.057483] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.038 [2024-05-15 13:33:06.057631] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.038 [2024-05-15 13:33:06.057837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.038 [2024-05-15 13:33:06.058090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.038 [2024-05-15 13:33:06.058167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.038 [2024-05-15 13:33:06.058175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.970 13:33:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:54.228 [2024-05-15 13:33:07.223417] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.228 13:33:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.489 13:33:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:54.489 13:33:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.057 13:33:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:55.057 13:33:07 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:55.315 13:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:55.573 13:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0784bd2e-e0fb-4d38-ae5e-46ecc8f08af3 00:14:55.573 13:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0784bd2e-e0fb-4d38-ae5e-46ecc8f08af3 lvol 20 00:14:55.833 13:33:08 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # lvol=eafaa033-6d77-453e-9ca8-05d2ad8e6b6e 00:14:55.833 13:33:08 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:56.090 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 eafaa033-6d77-453e-9ca8-05d2ad8e6b6e 00:14:56.349 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:56.607 [2024-05-15 13:33:09.591341] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:56.607 [2024-05-15 13:33:09.591940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.607 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.866 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=78355 00:14:56.866 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:56.866 13:33:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:58.241 13:33:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot eafaa033-6d77-453e-9ca8-05d2ad8e6b6e MY_SNAPSHOT 00:14:58.241 13:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=45ef1772-145a-45a6-bb00-154ef89588c2 00:14:58.241 13:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize eafaa033-6d77-453e-9ca8-05d2ad8e6b6e 30 00:14:58.500 13:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 45ef1772-145a-45a6-bb00-154ef89588c2 MY_CLONE 00:14:59.066 13:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1ed8fb1e-6f04-4cdd-a85f-ab203a858dd6 00:14:59.066 13:33:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1ed8fb1e-6f04-4cdd-a85f-ab203a858dd6 00:14:59.324 13:33:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 78355 00:15:07.436 Initializing NVMe Controllers 00:15:07.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:07.436 Controller IO queue size 128, less than required. 00:15:07.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:07.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:07.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:07.436 Initialization complete. Launching workers. 
00:15:07.436 ======================================================== 00:15:07.436 Latency(us) 00:15:07.436 Device Information : IOPS MiB/s Average min max 00:15:07.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9581.30 37.43 13371.27 5498.50 77980.83 00:15:07.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9522.70 37.20 13457.18 1869.79 80859.88 00:15:07.436 ======================================================== 00:15:07.436 Total : 19103.99 74.62 13414.09 1869.79 80859.88 00:15:07.436 00:15:07.436 13:33:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:07.436 13:33:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete eafaa033-6d77-453e-9ca8-05d2ad8e6b6e 00:15:08.004 13:33:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0784bd2e-e0fb-4d38-ae5e-46ecc8f08af3 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:08.004 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:08.004 rmmod nvme_tcp 00:15:08.262 rmmod nvme_fabrics 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 78274 ']' 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 78274 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 78274 ']' 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 78274 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78274 00:15:08.262 killing process with pid 78274 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78274' 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 78274 00:15:08.262 [2024-05-15 13:33:21.165233] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:08.262 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- 
# wait 78274 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:08.522 ************************************ 00:15:08.522 END TEST nvmf_lvol 00:15:08.522 ************************************ 00:15:08.522 00:15:08.522 real 0m16.136s 00:15:08.522 user 1m5.012s 00:15:08.522 sys 0m6.310s 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:08.522 13:33:21 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:08.522 13:33:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:08.522 13:33:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:08.522 13:33:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.522 ************************************ 00:15:08.522 START TEST nvmf_lvs_grow 00:15:08.522 ************************************ 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:08.522 * Looking for test storage... 
00:15:08.522 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.522 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.782 13:33:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:08.783 Cannot find device "nvmf_tgt_br" 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.783 Cannot find device "nvmf_tgt_br2" 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:08.783 Cannot find device "nvmf_tgt_br" 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:08.783 Cannot find device "nvmf_tgt_br2" 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.783 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.783 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:09.041 13:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:09.041 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:09.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:15:09.041 00:15:09.042 --- 10.0.0.2 ping statistics --- 00:15:09.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.042 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:09.042 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:09.042 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:09.042 00:15:09.042 --- 10.0.0.3 ping statistics --- 00:15:09.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.042 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:09.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:09.042 00:15:09.042 --- 10.0.0.1 ping statistics --- 00:15:09.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.042 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:09.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=78681 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 78681 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 78681 ']' 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
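For readers following the trace: the nvmf_veth_init/nvmfappstart sequence above reduces to a veth-plus-bridge topology with the NVMe-oF target in its own network namespace. A condensed sketch (run as root; interface names, addresses and the nvmf_tgt invocation are taken verbatim from the trace, only one of the two target-side interfaces is shown, and the trailing '&' backgrounding is an assumption of the sketch, not something the harness does this way):

  # namespace for the NVMe-oF target plus two veth pairs (initiator side and target side)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # address both ends and bring the links up
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # sanity-check reachability, then start the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &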
00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:09.042 13:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:09.042 [2024-05-15 13:33:22.099371] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:09.042 [2024-05-15 13:33:22.099709] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.300 [2024-05-15 13:33:22.228229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:09.300 [2024-05-15 13:33:22.246340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.300 [2024-05-15 13:33:22.300736] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.300 [2024-05-15 13:33:22.301025] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.300 [2024-05-15 13:33:22.301142] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.300 [2024-05-15 13:33:22.301291] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.300 [2024-05-15 13:33:22.301331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.300 [2024-05-15 13:33:22.301433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.235 13:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:10.235 [2024-05-15 13:33:23.313590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:10.495 ************************************ 00:15:10.495 START TEST lvs_grow_clean 00:15:10.495 ************************************ 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local 
bdevperf_pid run_test_pid 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:10.495 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.754 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:10.754 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:11.013 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:11.013 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:11.013 13:33:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:11.013 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:11.013 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:11.013 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a lvol 150 00:15:11.271 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d83dd082-4848-41aa-acd0-68d8c36d94d2 00:15:11.271 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:11.271 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:11.531 [2024-05-15 13:33:24.547998] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:11.531 [2024-05-15 13:33:24.548378] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:11.531 true 00:15:11.531 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:11.531 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:11.789 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:11.789 13:33:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:12.047 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d83dd082-4848-41aa-acd0-68d8c36d94d2 00:15:12.312 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:12.875 [2024-05-15 13:33:25.676411] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:12.875 [2024-05-15 13:33:25.677018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78764 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78764 /var/tmp/bdevperf.sock 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 78764 ']' 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:12.875 13:33:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:12.875 [2024-05-15 13:33:25.957947] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:12.876 [2024-05-15 13:33:25.958682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78764 ] 00:15:13.133 [2024-05-15 13:33:26.080337] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:13.133 [2024-05-15 13:33:26.095734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.133 [2024-05-15 13:33:26.153847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.066 13:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:14.066 13:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:15:14.066 13:33:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:14.324 Nvme0n1 00:15:14.324 13:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:14.581 [ 00:15:14.581 { 00:15:14.581 "name": "Nvme0n1", 00:15:14.581 "aliases": [ 00:15:14.581 "d83dd082-4848-41aa-acd0-68d8c36d94d2" 00:15:14.581 ], 00:15:14.581 "product_name": "NVMe disk", 00:15:14.581 "block_size": 4096, 00:15:14.581 "num_blocks": 38912, 00:15:14.581 "uuid": "d83dd082-4848-41aa-acd0-68d8c36d94d2", 00:15:14.581 "assigned_rate_limits": { 00:15:14.581 "rw_ios_per_sec": 0, 00:15:14.581 "rw_mbytes_per_sec": 0, 00:15:14.581 "r_mbytes_per_sec": 0, 00:15:14.581 "w_mbytes_per_sec": 0 00:15:14.582 }, 00:15:14.582 "claimed": false, 00:15:14.582 "zoned": false, 00:15:14.582 "supported_io_types": { 00:15:14.582 "read": true, 00:15:14.582 "write": true, 00:15:14.582 "unmap": true, 00:15:14.582 "write_zeroes": true, 00:15:14.582 "flush": true, 00:15:14.582 "reset": true, 00:15:14.582 "compare": true, 00:15:14.582 "compare_and_write": true, 00:15:14.582 "abort": true, 00:15:14.582 "nvme_admin": true, 00:15:14.582 "nvme_io": true 00:15:14.582 }, 00:15:14.582 "memory_domains": [ 00:15:14.582 { 00:15:14.582 "dma_device_id": "system", 00:15:14.582 "dma_device_type": 1 00:15:14.582 } 00:15:14.582 ], 00:15:14.582 "driver_specific": { 00:15:14.582 "nvme": [ 00:15:14.582 { 00:15:14.582 "trid": { 00:15:14.582 "trtype": "TCP", 00:15:14.582 "adrfam": "IPv4", 00:15:14.582 "traddr": "10.0.0.2", 00:15:14.582 "trsvcid": "4420", 00:15:14.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:14.582 }, 00:15:14.582 "ctrlr_data": { 00:15:14.582 "cntlid": 1, 00:15:14.582 "vendor_id": "0x8086", 00:15:14.582 "model_number": "SPDK bdev Controller", 00:15:14.582 "serial_number": "SPDK0", 00:15:14.582 "firmware_revision": "24.05", 00:15:14.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.582 "oacs": { 00:15:14.582 "security": 0, 00:15:14.582 "format": 0, 00:15:14.582 "firmware": 0, 00:15:14.582 "ns_manage": 0 00:15:14.582 }, 00:15:14.582 "multi_ctrlr": true, 00:15:14.582 "ana_reporting": false 00:15:14.582 }, 00:15:14.582 "vs": { 00:15:14.582 "nvme_version": "1.3" 00:15:14.582 }, 00:15:14.582 "ns_data": { 00:15:14.582 "id": 1, 00:15:14.582 "can_share": true 00:15:14.582 } 00:15:14.582 } 00:15:14.582 ], 00:15:14.582 "mp_policy": "active_passive" 00:15:14.582 } 00:15:14.582 } 00:15:14.582 ] 00:15:14.582 13:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.582 13:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78787 00:15:14.582 13:33:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 
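Stripped of the xtrace noise, the lvs_grow_clean provisioning that leads up to this bdevperf run is roughly the following RPC sequence. This is a sketch, not the script itself: the $rpc, $aio_file, $lvs and $lvol variables are shorthand introduced here, while the commands, paths, sizes and NQNs are the ones visible in the trace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # one-time TCP transport on the target started earlier
  $rpc nvmf_create_transport -t tcp -o -u 8192

  # 200M file-backed AIO bdev -> lvstore (4M clusters) -> 150M lvol
  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  # export the lvol over NVMe/TCP
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # attach the target to bdevperf (started with -z, waiting on /var/tmp/bdevperf.sock) and run the workload
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests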
00:15:14.582 Running I/O for 10 seconds... 00:15:15.952 Latency(us) 00:15:15.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.952 Nvme0n1 : 1.00 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:15:15.952 =================================================================================================================== 00:15:15.952 Total : 10414.00 40.68 0.00 0.00 0.00 0.00 0.00 00:15:15.952 00:15:16.516 13:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:16.773 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.773 Nvme0n1 : 2.00 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:15:16.773 =================================================================================================================== 00:15:16.773 Total : 10795.00 42.17 0.00 0.00 0.00 0.00 0.00 00:15:16.773 00:15:16.773 true 00:15:16.773 13:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:16.773 13:33:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:17.031 13:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:17.031 13:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:17.031 13:33:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 78787 00:15:17.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.596 Nvme0n1 : 3.00 10879.67 42.50 0.00 0.00 0.00 0.00 0.00 00:15:17.596 =================================================================================================================== 00:15:17.596 Total : 10879.67 42.50 0.00 0.00 0.00 0.00 0.00 00:15:17.596 00:15:18.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.527 Nvme0n1 : 4.00 10953.75 42.79 0.00 0.00 0.00 0.00 0.00 00:15:18.527 =================================================================================================================== 00:15:18.527 Total : 10953.75 42.79 0.00 0.00 0.00 0.00 0.00 00:15:18.527 00:15:19.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.900 Nvme0n1 : 5.00 10972.80 42.86 0.00 0.00 0.00 0.00 0.00 00:15:19.900 =================================================================================================================== 00:15:19.900 Total : 10972.80 42.86 0.00 0.00 0.00 0.00 0.00 00:15:19.900 00:15:20.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.868 Nvme0n1 : 6.00 10997.00 42.96 0.00 0.00 0.00 0.00 0.00 00:15:20.868 =================================================================================================================== 00:15:20.868 Total : 10997.00 42.96 0.00 0.00 0.00 0.00 0.00 00:15:20.868 00:15:21.804 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.804 Nvme0n1 : 7.00 10700.43 41.80 0.00 0.00 0.00 0.00 0.00 00:15:21.804 =================================================================================================================== 00:15:21.804 Total : 10700.43 41.80 0.00 0.00 0.00 0.00 0.00 
00:15:21.804 00:15:22.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.737 Nvme0n1 : 8.00 10759.88 42.03 0.00 0.00 0.00 0.00 0.00 00:15:22.737 =================================================================================================================== 00:15:22.737 Total : 10759.88 42.03 0.00 0.00 0.00 0.00 0.00 00:15:22.737 00:15:23.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.670 Nvme0n1 : 9.00 10792.00 42.16 0.00 0.00 0.00 0.00 0.00 00:15:23.670 =================================================================================================================== 00:15:23.670 Total : 10792.00 42.16 0.00 0.00 0.00 0.00 0.00 00:15:23.670 00:15:24.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.604 Nvme0n1 : 10.00 10817.70 42.26 0.00 0.00 0.00 0.00 0.00 00:15:24.604 =================================================================================================================== 00:15:24.604 Total : 10817.70 42.26 0.00 0.00 0.00 0.00 0.00 00:15:24.604 00:15:24.604 00:15:24.604 Latency(us) 00:15:24.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.604 Nvme0n1 : 10.01 10822.56 42.28 0.00 0.00 11822.47 5149.26 218702.99 00:15:24.604 =================================================================================================================== 00:15:24.604 Total : 10822.56 42.28 0.00 0.00 11822.47 5149.26 218702.99 00:15:24.604 0 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78764 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 78764 ']' 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 78764 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78764 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78764' 00:15:24.604 killing process with pid 78764 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 78764 00:15:24.604 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.604 00:15:24.604 Latency(us) 00:15:24.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.604 =================================================================================================================== 00:15:24.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.604 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 78764 00:15:24.861 13:33:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.119 13:33:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:25.376 13:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:25.376 13:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:25.940 13:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:25.940 13:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:25.940 13:33:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:25.940 [2024-05-15 13:33:39.006707] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:26.198 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:26.455 request: 00:15:26.455 { 00:15:26.455 "uuid": "85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a", 00:15:26.455 "method": "bdev_lvol_get_lvstores", 00:15:26.455 "req_id": 1 00:15:26.455 } 00:15:26.455 Got JSON-RPC error response 00:15:26.455 response: 00:15:26.455 { 00:15:26.455 "code": -19, 00:15:26.455 "message": "No such device" 00:15:26.455 } 00:15:26.455 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:26.455 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.455 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- 
# [[ -n '' ]] 00:15:26.455 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.455 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.713 aio_bdev 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d83dd082-4848-41aa-acd0-68d8c36d94d2 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=d83dd082-4848-41aa-acd0-68d8c36d94d2 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:26.713 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:26.971 13:33:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d83dd082-4848-41aa-acd0-68d8c36d94d2 -t 2000 00:15:26.971 [ 00:15:26.971 { 00:15:26.971 "name": "d83dd082-4848-41aa-acd0-68d8c36d94d2", 00:15:26.971 "aliases": [ 00:15:26.971 "lvs/lvol" 00:15:26.971 ], 00:15:26.971 "product_name": "Logical Volume", 00:15:26.971 "block_size": 4096, 00:15:26.971 "num_blocks": 38912, 00:15:26.971 "uuid": "d83dd082-4848-41aa-acd0-68d8c36d94d2", 00:15:26.971 "assigned_rate_limits": { 00:15:26.971 "rw_ios_per_sec": 0, 00:15:26.971 "rw_mbytes_per_sec": 0, 00:15:26.971 "r_mbytes_per_sec": 0, 00:15:26.971 "w_mbytes_per_sec": 0 00:15:26.971 }, 00:15:26.971 "claimed": false, 00:15:26.972 "zoned": false, 00:15:26.972 "supported_io_types": { 00:15:26.972 "read": true, 00:15:26.972 "write": true, 00:15:26.972 "unmap": true, 00:15:26.972 "write_zeroes": true, 00:15:26.972 "flush": false, 00:15:26.972 "reset": true, 00:15:26.972 "compare": false, 00:15:26.972 "compare_and_write": false, 00:15:26.972 "abort": false, 00:15:26.972 "nvme_admin": false, 00:15:26.972 "nvme_io": false 00:15:26.972 }, 00:15:26.972 "driver_specific": { 00:15:26.972 "lvol": { 00:15:26.972 "lvol_store_uuid": "85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a", 00:15:26.972 "base_bdev": "aio_bdev", 00:15:26.972 "thin_provision": false, 00:15:26.972 "num_allocated_clusters": 38, 00:15:26.972 "snapshot": false, 00:15:26.972 "clone": false, 00:15:26.972 "esnap_clone": false 00:15:26.972 } 00:15:26.972 } 00:15:26.972 } 00:15:26.972 ] 00:15:26.972 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:15:26.972 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:26.972 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:27.263 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:27.263 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:27.263 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:27.521 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:27.521 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d83dd082-4848-41aa-acd0-68d8c36d94d2 00:15:27.779 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85b0a3b1-50a7-4a1b-a06c-2b07dd0f0e2a 00:15:28.036 13:33:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:28.294 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:28.553 ************************************ 00:15:28.553 END TEST lvs_grow_clean 00:15:28.553 ************************************ 00:15:28.553 00:15:28.553 real 0m18.293s 00:15:28.553 user 0m16.563s 00:15:28.553 sys 0m3.129s 00:15:28.553 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:28.553 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:28.811 ************************************ 00:15:28.811 START TEST lvs_grow_dirty 00:15:28.811 ************************************ 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:28.811 13:33:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:29.068 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:29.068 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:29.325 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:29.325 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:29.325 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:29.606 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:29.606 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:29.606 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 19ec8d52-f16d-4007-a6ec-b832184a906d lvol 150 00:15:29.863 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:29.863 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:29.863 13:33:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:30.149 [2024-05-15 13:33:43.068948] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:30.149 [2024-05-15 13:33:43.069209] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:30.149 true 00:15:30.149 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:30.149 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:30.442 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:30.442 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:30.698 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:30.955 13:33:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:30.955 [2024-05-15 13:33:44.013413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.955 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # 
bdevperf_pid=79033 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79033 /var/tmp/bdevperf.sock 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 79033 ']' 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:31.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:31.212 13:33:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:31.469 [2024-05-15 13:33:44.316851] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:31.469 [2024-05-15 13:33:44.318277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79033 ] 00:15:31.470 [2024-05-15 13:33:44.449513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
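The trace above compresses the whole lvs_grow_dirty setup into one xtrace run: an AIO bdev is created on a file-backed device, an lvstore with 4 MiB clusters is built on top of it (49 data clusters), a 150 MiB lvol is carved out and exported over NVMe/TCP, the backing file is truncated to 400 MiB and rescanned so the lvstore has room to grow later, and bdevperf is started in -z (wait-for-RPC) mode on its own socket. A minimal standalone sketch of that setup follows; the backing-file path is a placeholder and a running nvmf_tgt on the default /var/tmp/spdk.sock is assumed, so treat it as an illustration rather than the test script itself.

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    backing=/tmp/aio_backing_file            # placeholder path for the file-backed AIO bdev

    truncate -s 200M "$backing"                                    # initial size, ~49 usable 4 MiB clusters
    $rpc bdev_aio_create "$backing" aio_bdev 4096                  # AIO bdev with 4 KiB blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
            --md-pages-per-cluster-ratio 300 aio_bdev lvs)         # 4 MiB clusters, as in the trace
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)               # 150 MiB logical volume

    # Export the lvol over NVMe/TCP so bdevperf can attach to it remotely.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Grow the backing device; the lvstore itself is grown later with bdev_lvol_grow_lvstore.
    truncate -s 400M "$backing"
    $rpc bdev_aio_rescan aio_bdev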
00:15:31.470 [2024-05-15 13:33:44.470602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.470 [2024-05-15 13:33:44.527381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.403 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:32.403 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:15:32.403 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:32.403 Nvme0n1 00:15:32.403 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:32.661 [ 00:15:32.661 { 00:15:32.661 "name": "Nvme0n1", 00:15:32.661 "aliases": [ 00:15:32.661 "5164919b-6ba7-4090-8e5c-3b8b591d2096" 00:15:32.661 ], 00:15:32.661 "product_name": "NVMe disk", 00:15:32.661 "block_size": 4096, 00:15:32.661 "num_blocks": 38912, 00:15:32.661 "uuid": "5164919b-6ba7-4090-8e5c-3b8b591d2096", 00:15:32.661 "assigned_rate_limits": { 00:15:32.661 "rw_ios_per_sec": 0, 00:15:32.661 "rw_mbytes_per_sec": 0, 00:15:32.661 "r_mbytes_per_sec": 0, 00:15:32.661 "w_mbytes_per_sec": 0 00:15:32.661 }, 00:15:32.661 "claimed": false, 00:15:32.661 "zoned": false, 00:15:32.661 "supported_io_types": { 00:15:32.661 "read": true, 00:15:32.661 "write": true, 00:15:32.661 "unmap": true, 00:15:32.661 "write_zeroes": true, 00:15:32.661 "flush": true, 00:15:32.661 "reset": true, 00:15:32.661 "compare": true, 00:15:32.661 "compare_and_write": true, 00:15:32.661 "abort": true, 00:15:32.661 "nvme_admin": true, 00:15:32.661 "nvme_io": true 00:15:32.661 }, 00:15:32.661 "memory_domains": [ 00:15:32.661 { 00:15:32.661 "dma_device_id": "system", 00:15:32.661 "dma_device_type": 1 00:15:32.661 } 00:15:32.661 ], 00:15:32.661 "driver_specific": { 00:15:32.661 "nvme": [ 00:15:32.661 { 00:15:32.661 "trid": { 00:15:32.661 "trtype": "TCP", 00:15:32.661 "adrfam": "IPv4", 00:15:32.661 "traddr": "10.0.0.2", 00:15:32.661 "trsvcid": "4420", 00:15:32.661 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:32.661 }, 00:15:32.662 "ctrlr_data": { 00:15:32.662 "cntlid": 1, 00:15:32.662 "vendor_id": "0x8086", 00:15:32.662 "model_number": "SPDK bdev Controller", 00:15:32.662 "serial_number": "SPDK0", 00:15:32.662 "firmware_revision": "24.05", 00:15:32.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:32.662 "oacs": { 00:15:32.662 "security": 0, 00:15:32.662 "format": 0, 00:15:32.662 "firmware": 0, 00:15:32.662 "ns_manage": 0 00:15:32.662 }, 00:15:32.662 "multi_ctrlr": true, 00:15:32.662 "ana_reporting": false 00:15:32.662 }, 00:15:32.662 "vs": { 00:15:32.662 "nvme_version": "1.3" 00:15:32.662 }, 00:15:32.662 "ns_data": { 00:15:32.662 "id": 1, 00:15:32.662 "can_share": true 00:15:32.662 } 00:15:32.662 } 00:15:32.662 ], 00:15:32.662 "mp_policy": "active_passive" 00:15:32.662 } 00:15:32.662 } 00:15:32.662 ] 00:15:32.662 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79051 00:15:32.662 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.662 13:33:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 
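Because bdevperf runs with -z and its own RPC socket, nothing happens until the namespace is attached and the workload is kicked off over that socket; the JSON dump of Nvme0n1 above (38912 blocks of 4096 bytes, i.e. the 150 MiB lvol rounded up to 38 whole 4 MiB clusters) confirms the attach worked. The commands, condensed from the trace into one hedged snippet:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    # Attach the exported namespace inside bdevperf as bdev Nvme0n1.
    $rpc -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    $rpc -s "$sock" bdev_get_bdevs -b Nvme0n1 -t 3000    # wait for the bdev to appear (timeout from the trace)

    # Start the randwrite workload that was configured on the bdevperf command line.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests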
00:15:32.662 Running I/O for 10 seconds... 00:15:34.036 Latency(us) 00:15:34.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.036 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.036 Nvme0n1 : 1.00 11049.00 43.16 0.00 0.00 0.00 0.00 0.00 00:15:34.036 =================================================================================================================== 00:15:34.036 Total : 11049.00 43.16 0.00 0.00 0.00 0.00 0.00 00:15:34.036 00:15:34.602 13:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:34.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.861 Nvme0n1 : 2.00 10985.50 42.91 0.00 0.00 0.00 0.00 0.00 00:15:34.861 =================================================================================================================== 00:15:34.861 Total : 10985.50 42.91 0.00 0.00 0.00 0.00 0.00 00:15:34.861 00:15:34.861 true 00:15:35.119 13:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:35.119 13:33:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:35.378 13:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:35.378 13:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:35.378 13:33:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 79051 00:15:35.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.635 Nvme0n1 : 3.00 10746.00 41.98 0.00 0.00 0.00 0.00 0.00 00:15:35.635 =================================================================================================================== 00:15:35.635 Total : 10746.00 41.98 0.00 0.00 0.00 0.00 0.00 00:15:35.635 00:15:37.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.009 Nvme0n1 : 4.00 10758.25 42.02 0.00 0.00 0.00 0.00 0.00 00:15:37.009 =================================================================================================================== 00:15:37.009 Total : 10758.25 42.02 0.00 0.00 0.00 0.00 0.00 00:15:37.009 00:15:37.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.944 Nvme0n1 : 5.00 10765.60 42.05 0.00 0.00 0.00 0.00 0.00 00:15:37.944 =================================================================================================================== 00:15:37.944 Total : 10765.60 42.05 0.00 0.00 0.00 0.00 0.00 00:15:37.944 00:15:38.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.880 Nvme0n1 : 6.00 10770.50 42.07 0.00 0.00 0.00 0.00 0.00 00:15:38.880 =================================================================================================================== 00:15:38.880 Total : 10770.50 42.07 0.00 0.00 0.00 0.00 0.00 00:15:38.880 00:15:39.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.813 Nvme0n1 : 7.00 10535.57 41.15 0.00 0.00 0.00 0.00 0.00 00:15:39.813 =================================================================================================================== 00:15:39.813 Total : 10535.57 41.15 0.00 0.00 0.00 0.00 0.00 
00:15:39.813 00:15:40.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.748 Nvme0n1 : 8.00 10536.25 41.16 0.00 0.00 0.00 0.00 0.00 00:15:40.748 =================================================================================================================== 00:15:40.748 Total : 10536.25 41.16 0.00 0.00 0.00 0.00 0.00 00:15:40.748 00:15:41.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.737 Nvme0n1 : 9.00 10536.78 41.16 0.00 0.00 0.00 0.00 0.00 00:15:41.737 =================================================================================================================== 00:15:41.737 Total : 10536.78 41.16 0.00 0.00 0.00 0.00 0.00 00:15:41.737 00:15:42.672 00:15:42.672 Latency(us) 00:15:42.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.672 Nvme0n1 : 10.00 10432.58 40.75 0.00 0.00 12264.46 5430.13 164776.23 00:15:42.672 =================================================================================================================== 00:15:42.672 Total : 10432.58 40.75 0.00 0.00 12264.46 5430.13 164776.23 00:15:42.672 0 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79033 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 79033 ']' 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 79033 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:42.672 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79033 00:15:42.930 killing process with pid 79033 00:15:42.930 Received shutdown signal, test time was about 10.000000 seconds 00:15:42.930 00:15:42.930 Latency(us) 00:15:42.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:42.930 =================================================================================================================== 00:15:42.930 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79033' 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 79033 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 79033 00:15:42.930 13:33:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:43.496 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:43.496 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:43.496 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 78681 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 78681 00:15:43.754 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 78681 Killed "${NVMF_APP[@]}" "$@" 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=79189 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 79189 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 79189 ']' 00:15:43.754 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.755 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:43.755 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.755 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:43.755 13:33:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:44.013 [2024-05-15 13:33:56.914415] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:44.013 [2024-05-15 13:33:56.914550] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.013 [2024-05-15 13:33:57.048841] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:44.013 [2024-05-15 13:33:57.067627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.280 [2024-05-15 13:33:57.121699] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.280 [2024-05-15 13:33:57.121946] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
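What makes this variant "dirty" is visible in the teardown above: after the lvstore has been grown under I/O (total_data_clusters goes from 49 to 99) and bdevperf has exited, the nvmf target holding the lvstore (pid 78681) is terminated with kill -9 rather than shut down cleanly, and a fresh nvmf_tgt is started in its place. Re-creating the AIO bdev on the new instance is what forces the blobstore recovery reported in the next lines. A hedged sketch of that crash-and-recover sequence, with the pid and lvstore UUID as shell variables rather than the literal values from the trace (the test also wraps the target in its network namespace, omitted here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    backing=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

    # Grow the lvstore onto the clusters added by the earlier truncate + rescan.
    $rpc bdev_lvol_grow_lvstore -u "$lvs"        # total_data_clusters: 49 -> 99

    # Simulate a crash: the lvstore is never unloaded cleanly.
    kill -9 "$nvmfpid"                           # pid of the old nvmf_tgt (78681 in the trace)

    # Start a replacement target and re-attach the backing file; loading an lvstore
    # that was never cleanly shut down triggers blobstore recovery.
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    $rpc bdev_aio_create "$backing" aio_bdev 4096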
00:15:44.280 [2024-05-15 13:33:57.122046] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:44.280 [2024-05-15 13:33:57.122096] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:44.280 [2024-05-15 13:33:57.122168] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.280 [2024-05-15 13:33:57.122228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:44.280 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:44.572 [2024-05-15 13:33:57.509014] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:44.572 [2024-05-15 13:33:57.509493] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:44.572 [2024-05-15 13:33:57.514227] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:44.572 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:44.830 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5164919b-6ba7-4090-8e5c-3b8b591d2096 -t 2000 00:15:45.088 [ 00:15:45.088 { 00:15:45.088 "name": "5164919b-6ba7-4090-8e5c-3b8b591d2096", 00:15:45.088 "aliases": [ 00:15:45.088 "lvs/lvol" 00:15:45.088 ], 00:15:45.088 "product_name": "Logical Volume", 00:15:45.088 "block_size": 4096, 00:15:45.088 "num_blocks": 38912, 00:15:45.088 "uuid": "5164919b-6ba7-4090-8e5c-3b8b591d2096", 00:15:45.088 "assigned_rate_limits": { 00:15:45.088 "rw_ios_per_sec": 0, 00:15:45.088 "rw_mbytes_per_sec": 0, 00:15:45.088 "r_mbytes_per_sec": 0, 00:15:45.088 "w_mbytes_per_sec": 0 00:15:45.088 }, 00:15:45.088 
"claimed": false, 00:15:45.088 "zoned": false, 00:15:45.088 "supported_io_types": { 00:15:45.088 "read": true, 00:15:45.088 "write": true, 00:15:45.088 "unmap": true, 00:15:45.088 "write_zeroes": true, 00:15:45.088 "flush": false, 00:15:45.088 "reset": true, 00:15:45.088 "compare": false, 00:15:45.088 "compare_and_write": false, 00:15:45.088 "abort": false, 00:15:45.088 "nvme_admin": false, 00:15:45.088 "nvme_io": false 00:15:45.088 }, 00:15:45.088 "driver_specific": { 00:15:45.088 "lvol": { 00:15:45.088 "lvol_store_uuid": "19ec8d52-f16d-4007-a6ec-b832184a906d", 00:15:45.088 "base_bdev": "aio_bdev", 00:15:45.088 "thin_provision": false, 00:15:45.088 "num_allocated_clusters": 38, 00:15:45.088 "snapshot": false, 00:15:45.088 "clone": false, 00:15:45.088 "esnap_clone": false 00:15:45.088 } 00:15:45.088 } 00:15:45.088 } 00:15:45.088 ] 00:15:45.088 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:15:45.088 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:45.088 13:33:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:45.346 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:45.346 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:45.346 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:45.346 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:45.346 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:45.604 [2024-05-15 13:33:58.666466] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:45.862 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:45.862 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:45.862 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:45.862 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.862 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:45.863 13:33:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:45.863 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:46.122 request: 00:15:46.122 { 00:15:46.122 "uuid": "19ec8d52-f16d-4007-a6ec-b832184a906d", 00:15:46.122 "method": "bdev_lvol_get_lvstores", 00:15:46.122 "req_id": 1 00:15:46.122 } 00:15:46.122 Got JSON-RPC error response 00:15:46.122 response: 00:15:46.122 { 00:15:46.122 "code": -19, 00:15:46.122 "message": "No such device" 00:15:46.122 } 00:15:46.122 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:46.122 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.122 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.122 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.122 13:33:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:46.122 aio_bdev 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:46.122 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:46.691 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5164919b-6ba7-4090-8e5c-3b8b591d2096 -t 2000 00:15:46.691 [ 00:15:46.691 { 00:15:46.691 "name": "5164919b-6ba7-4090-8e5c-3b8b591d2096", 00:15:46.691 "aliases": [ 00:15:46.691 "lvs/lvol" 00:15:46.691 ], 00:15:46.691 "product_name": "Logical Volume", 00:15:46.691 "block_size": 4096, 00:15:46.691 "num_blocks": 38912, 00:15:46.691 "uuid": "5164919b-6ba7-4090-8e5c-3b8b591d2096", 00:15:46.691 "assigned_rate_limits": { 00:15:46.691 "rw_ios_per_sec": 0, 00:15:46.691 "rw_mbytes_per_sec": 0, 00:15:46.691 "r_mbytes_per_sec": 0, 00:15:46.691 "w_mbytes_per_sec": 0 00:15:46.691 }, 00:15:46.691 "claimed": false, 00:15:46.691 "zoned": false, 00:15:46.691 "supported_io_types": { 00:15:46.692 "read": true, 00:15:46.692 "write": true, 00:15:46.692 "unmap": true, 00:15:46.692 "write_zeroes": true, 00:15:46.692 "flush": false, 00:15:46.692 "reset": true, 00:15:46.692 "compare": false, 00:15:46.692 "compare_and_write": false, 00:15:46.692 "abort": false, 00:15:46.692 
"nvme_admin": false, 00:15:46.692 "nvme_io": false 00:15:46.692 }, 00:15:46.692 "driver_specific": { 00:15:46.692 "lvol": { 00:15:46.692 "lvol_store_uuid": "19ec8d52-f16d-4007-a6ec-b832184a906d", 00:15:46.692 "base_bdev": "aio_bdev", 00:15:46.692 "thin_provision": false, 00:15:46.692 "num_allocated_clusters": 38, 00:15:46.692 "snapshot": false, 00:15:46.692 "clone": false, 00:15:46.692 "esnap_clone": false 00:15:46.692 } 00:15:46.692 } 00:15:46.692 } 00:15:46.692 ] 00:15:46.692 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:15:46.692 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:46.692 13:33:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:46.953 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:46.953 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:46.953 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:47.211 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:47.211 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5164919b-6ba7-4090-8e5c-3b8b591d2096 00:15:47.493 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 19ec8d52-f16d-4007-a6ec-b832184a906d 00:15:47.752 13:34:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:48.011 13:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:15:48.739 ************************************ 00:15:48.739 END TEST lvs_grow_dirty 00:15:48.739 ************************************ 00:15:48.739 00:15:48.739 real 0m19.750s 00:15:48.739 user 0m44.124s 00:15:48.739 sys 0m9.370s 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C 
/dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:48.739 nvmf_trace.0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.739 rmmod nvme_tcp 00:15:48.739 rmmod nvme_fabrics 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 79189 ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 79189 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 79189 ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 79189 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79189 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79189' 00:15:48.739 killing process with pid 79189 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 79189 00:15:48.739 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 79189 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.997 00:15:48.997 real 0m40.481s 00:15:48.997 user 1m6.301s 00:15:48.997 sys 0m13.252s 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:48.997 13:34:01 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.997 ************************************ 00:15:48.997 END TEST nvmf_lvs_grow 00:15:48.997 ************************************ 00:15:48.997 13:34:02 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:48.997 13:34:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:48.997 13:34:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:48.997 13:34:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:48.997 ************************************ 00:15:48.997 START TEST nvmf_bdev_io_wait 00:15:48.997 ************************************ 00:15:48.997 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:49.257 * Looking for test storage... 00:15:49.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:49.257 Cannot find device "nvmf_tgt_br" 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.257 Cannot find device "nvmf_tgt_br2" 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:15:49.257 Cannot find device "nvmf_tgt_br" 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:49.257 Cannot find device "nvmf_tgt_br2" 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.257 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.257 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:49.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:49.517 00:15:49.517 --- 10.0.0.2 ping statistics --- 00:15:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.517 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:49.517 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.517 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:15:49.517 00:15:49.517 --- 10.0.0.3 ping statistics --- 00:15:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.517 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:49.517 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:49.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:49.518 00:15:49.518 --- 10.0.0.1 ping statistics --- 00:15:49.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.518 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=79492 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 79492 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 79492 ']' 00:15:49.518 
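The nvmf_veth_init block above builds the virtual network the rest of the suite talks over: veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, a bridge joining the host-side ends, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 inside the namespace, an iptables rule admitting NVMe/TCP on port 4420, and ping checks in both directions. Condensed into a standalone sketch (run as root; the second target interface for 10.0.0.3 is set up the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the *_br ends stay in the root namespace, the target end moves into the netns.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side ends together and allow NVMe/TCP traffic in.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Sanity checks, as in the trace.
    ping -c 1 10.0.0.2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1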
13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.518 13:34:02 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:49.776 [2024-05-15 13:34:02.645101] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:49.776 [2024-05-15 13:34:02.645198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.776 [2024-05-15 13:34:02.774384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:49.776 [2024-05-15 13:34:02.794067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.776 [2024-05-15 13:34:02.850873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.776 [2024-05-15 13:34:02.850932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.776 [2024-05-15 13:34:02.850947] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.776 [2024-05-15 13:34:02.850960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.776 [2024-05-15 13:34:02.850972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
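Starting the target with --wait-for-rpc is what makes the next few RPCs possible: the framework idles until framework_start_init is called, so bdev_set_options can still shrink the bdev_io pool (-p 5 -c 1, a deliberately small pool, which appears to be the condition the bdev_io_wait test exercises) before any subsystem comes up. The configuration sequence that follows in the trace, restated as a hedged sketch against the default /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_set_options -p 5 -c 1        # small bdev_io pool/cache; only accepted before init
    $rpc framework_start_init              # finish SPDK subsystem initialization
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420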
00:15:49.776 [2024-05-15 13:34:02.851078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.776 [2024-05-15 13:34:02.851280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.777 [2024-05-15 13:34:02.852022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.777 [2024-05-15 13:34:02.852026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:50.710 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.711 [2024-05-15 13:34:03.787552] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.711 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 Malloc0 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:50.968 [2024-05-15 13:34:03.843443] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:50.968 [2024-05-15 13:34:03.843712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=79527 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=79529 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:50.968 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:50.969 { 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme$subsystem", 00:15:50.969 "trtype": "$TEST_TRANSPORT", 00:15:50.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "$NVMF_PORT", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.969 "hdgst": ${hdgst:-false}, 00:15:50.969 "ddgst": ${ddgst:-false} 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 } 00:15:50.969 EOF 00:15:50.969 )") 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=79531 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:50.969 { 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme$subsystem", 00:15:50.969 "trtype": "$TEST_TRANSPORT", 00:15:50.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "$NVMF_PORT", 
00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.969 "hdgst": ${hdgst:-false}, 00:15:50.969 "ddgst": ${ddgst:-false} 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 } 00:15:50.969 EOF 00:15:50.969 )") 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=79534 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme1", 00:15:50.969 "trtype": "tcp", 00:15:50.969 "traddr": "10.0.0.2", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "4420", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.969 "hdgst": false, 00:15:50.969 "ddgst": false 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 }' 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme1", 00:15:50.969 "trtype": "tcp", 00:15:50.969 "traddr": "10.0.0.2", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "4420", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.969 "hdgst": false, 00:15:50.969 "ddgst": false 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 }' 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:50.969 { 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme$subsystem", 00:15:50.969 "trtype": "$TEST_TRANSPORT", 00:15:50.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "$NVMF_PORT", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.969 "hdgst": ${hdgst:-false}, 00:15:50.969 "ddgst": 
${ddgst:-false} 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 } 00:15:50.969 EOF 00:15:50.969 )") 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:50.969 { 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme$subsystem", 00:15:50.969 "trtype": "$TEST_TRANSPORT", 00:15:50.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "$NVMF_PORT", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:50.969 "hdgst": ${hdgst:-false}, 00:15:50.969 "ddgst": ${ddgst:-false} 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 } 00:15:50.969 EOF 00:15:50.969 )") 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 79527 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:50.969 [2024-05-15 13:34:03.904258] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:50.969 [2024-05-15 13:34:03.904338] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:50.969 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme1", 00:15:50.969 "trtype": "tcp", 00:15:50.969 "traddr": "10.0.0.2", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "4420", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.969 "hdgst": false, 00:15:50.969 "ddgst": false 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 }' 00:15:50.969 [2024-05-15 13:34:03.909304] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:50.969 [2024-05-15 13:34:03.909591] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:50.969 .cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-ty 13:34:03 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:50.969 "params": { 00:15:50.969 "name": "Nvme1", 00:15:50.969 "trtype": "tcp", 00:15:50.969 "traddr": "10.0.0.2", 00:15:50.969 "adrfam": "ipv4", 00:15:50.969 "trsvcid": "4420", 00:15:50.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:50.969 "hdgst": false, 00:15:50.969 "ddgst": false 00:15:50.969 }, 00:15:50.969 "method": "bdev_nvme_attach_controller" 00:15:50.969 }' 00:15:50.969 pe=auto ] 00:15:50.969 [2024-05-15 13:34:03.923527] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:15:50.969 [2024-05-15 13:34:03.923609] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:50.969 [2024-05-15 13:34:03.925183] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:50.969 [2024-05-15 13:34:03.925289] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:51.227 [2024-05-15 13:34:04.103941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:51.227 [2024-05-15 13:34:04.123493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.227 [2024-05-15 13:34:04.170499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:51.227 [2024-05-15 13:34:04.170527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:51.227 [2024-05-15 13:34:04.190612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.227 [2024-05-15 13:34:04.227580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:51.227 [2024-05-15 13:34:04.234326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:51.227 [2024-05-15 13:34:04.253871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.227 [2024-05-15 13:34:04.283725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:51.227 [2024-05-15 13:34:04.303161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:51.227 [2024-05-15 13:34:04.323716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.484 Running I/O for 1 seconds... 00:15:51.484 [2024-05-15 13:34:04.353655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:51.484 Running I/O for 1 seconds... 00:15:51.484 Running I/O for 1 seconds... 00:15:51.484 Running I/O for 1 seconds... 
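Each of the four bdevperf jobs launched above is pinned to its own core and fed a generated NVMe-oF attach config on /dev/fd/63. A rough sketch of how one such job (the write workload) is started, with the flags copied from the log above and gen_nvmf_target_json coming from the test's nvmf/common.sh helpers:

    # Launch one bdevperf instance against the TCP target created earlier
    # (core mask 0x10, queue depth 128, 4 KiB writes, 1 second run, 256 MiB hugepages).
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!

    # The test collects one PID per workload and waits on them after all four are running.
    wait "$WRITE_PID"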
00:15:52.416 00:15:52.416 Latency(us) 00:15:52.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.416 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:52.416 Nvme1n1 : 1.00 181537.38 709.13 0.00 0.00 702.58 323.78 1084.46 00:15:52.416 =================================================================================================================== 00:15:52.416 Total : 181537.38 709.13 0.00 0.00 702.58 323.78 1084.46 00:15:52.416 00:15:52.416 Latency(us) 00:15:52.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.416 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:52.416 Nvme1n1 : 1.02 6017.43 23.51 0.00 0.00 21013.80 7739.49 35202.19 00:15:52.416 =================================================================================================================== 00:15:52.416 Total : 6017.43 23.51 0.00 0.00 21013.80 7739.49 35202.19 00:15:52.416 00:15:52.416 Latency(us) 00:15:52.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.416 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:52.416 Nvme1n1 : 1.01 5855.69 22.87 0.00 0.00 21771.61 7396.21 39945.75 00:15:52.416 =================================================================================================================== 00:15:52.416 Total : 5855.69 22.87 0.00 0.00 21771.61 7396.21 39945.75 00:15:52.416 00:15:52.416 Latency(us) 00:15:52.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.416 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:52.416 Nvme1n1 : 1.01 9699.64 37.89 0.00 0.00 13146.66 6709.64 24341.94 00:15:52.416 =================================================================================================================== 00:15:52.416 Total : 9699.64 37.89 0.00 0.00 13146.66 6709.64 24341.94 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 79529 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 79531 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 79534 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.673 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:52.674 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:52.674 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:52.674 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:52.931 rmmod nvme_tcp 00:15:52.931 rmmod nvme_fabrics 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe 
-v -r nvme-fabrics 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 79492 ']' 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 79492 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 79492 ']' 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 79492 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79492 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:52.931 killing process with pid 79492 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79492' 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 79492 00:15:52.931 [2024-05-15 13:34:05.845434] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:52.931 13:34:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 79492 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.931 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.189 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:53.189 ************************************ 00:15:53.189 END TEST nvmf_bdev_io_wait 00:15:53.189 ************************************ 00:15:53.189 00:15:53.189 real 0m4.016s 00:15:53.189 user 0m17.124s 00:15:53.189 sys 0m2.304s 00:15:53.189 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:53.189 13:34:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:53.189 13:34:06 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:53.189 13:34:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:53.189 13:34:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:53.189 13:34:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:53.189 ************************************ 
00:15:53.189 START TEST nvmf_queue_depth 00:15:53.189 ************************************ 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:53.189 * Looking for test storage... 00:15:53.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:53.189 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:53.448 Cannot find device "nvmf_tgt_br" 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.448 Cannot find device "nvmf_tgt_br2" 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:53.448 Cannot find device "nvmf_tgt_br" 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:53.448 Cannot find device "nvmf_tgt_br2" 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:53.448 13:34:06 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:53.448 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:15:53.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:53.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:15:53.707 00:15:53.707 --- 10.0.0.2 ping statistics --- 00:15:53.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.707 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:53.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:53.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:53.707 00:15:53.707 --- 10.0.0.3 ping statistics --- 00:15:53.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.707 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:53.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:53.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:53.707 00:15:53.707 --- 10.0.0.1 ping statistics --- 00:15:53.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:53.707 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=79766 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 79766 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 79766 ']' 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:53.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:53.707 13:34:06 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:53.707 [2024-05-15 13:34:06.653784] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:53.707 [2024-05-15 13:34:06.653864] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.707 [2024-05-15 13:34:06.774421] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:53.707 [2024-05-15 13:34:06.795585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.966 [2024-05-15 13:34:06.851024] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.966 [2024-05-15 13:34:06.851091] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.966 [2024-05-15 13:34:06.851106] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.966 [2024-05-15 13:34:06.851119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.966 [2024-05-15 13:34:06.851131] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.966 [2024-05-15 13:34:06.851169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.531 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.531 [2024-05-15 13:34:07.628343] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 Malloc0 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.789 13:34:07 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.789 [2024-05-15 13:34:07.681192] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:54.789 [2024-05-15 13:34:07.681444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=79798 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 79798 /var/tmp/bdevperf.sock 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 79798 ']' 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:54.789 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:54.790 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:54.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:54.790 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:54.790 13:34:07 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:54.790 [2024-05-15 13:34:07.735828] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:15:54.790 [2024-05-15 13:34:07.735938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79798 ] 00:15:54.790 [2024-05-15 13:34:07.863954] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:54.790 [2024-05-15 13:34:07.886268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.047 [2024-05-15 13:34:07.942093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.614 13:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:55.614 13:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:55.614 13:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:55.614 13:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.614 13:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:55.975 NVMe0n1 00:15:55.975 13:34:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.975 13:34:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:55.975 Running I/O for 10 seconds... 00:16:05.955 00:16:05.955 Latency(us) 00:16:05.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.955 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:05.955 Verification LBA range: start 0x0 length 0x4000 00:16:05.955 NVMe0n1 : 10.09 9344.62 36.50 0.00 0.00 109128.65 21470.84 78892.86 00:16:05.955 =================================================================================================================== 00:16:05.955 Total : 9344.62 36.50 0.00 0.00 109128.65 21470.84 78892.86 00:16:05.955 0 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 79798 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 79798 ']' 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 79798 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:05.955 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79798 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:06.213 killing process with pid 79798 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79798' 00:16:06.213 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.213 00:16:06.213 Latency(us) 00:16:06.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.213 =================================================================================================================== 00:16:06.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 79798 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 79798 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:06.213 13:34:19 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:06.213 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:06.213 rmmod nvme_tcp 00:16:06.213 rmmod nvme_fabrics 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 79766 ']' 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 79766 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 79766 ']' 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 79766 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79766 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79766' 00:16:06.471 killing process with pid 79766 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 79766 00:16:06.471 [2024-05-15 13:34:19.364975] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:06.471 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 79766 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:06.730 00:16:06.730 real 0m13.496s 00:16:06.730 user 0m23.186s 00:16:06.730 sys 0m2.463s 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:06.730 13:34:19 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.730 ************************************ 00:16:06.730 END TEST nvmf_queue_depth 00:16:06.730 ************************************ 00:16:06.730 13:34:19 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:06.730 13:34:19 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:06.730 13:34:19 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:06.730 13:34:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:06.730 ************************************ 00:16:06.730 START TEST nvmf_target_multipath 00:16:06.730 ************************************ 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:06.730 * Looking for test storage... 00:16:06.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.730 13:34:19 
nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:06.730 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:06.989 Cannot find device "nvmf_tgt_br" 00:16:06.989 13:34:19 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@155 -- # true 00:16:06.989 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.990 Cannot find device "nvmf_tgt_br2" 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:06.990 Cannot find device "nvmf_tgt_br" 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:06.990 Cannot find device "nvmf_tgt_br2" 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:06.990 13:34:19 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.990 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:07.248 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:07.248 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.248 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:07.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:07.249 00:16:07.249 --- 10.0.0.2 ping statistics --- 00:16:07.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.249 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:07.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:07.249 00:16:07.249 --- 10.0.0.3 ping statistics --- 00:16:07.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.249 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:07.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:07.249 00:16:07.249 --- 10.0.0.1 ping statistics --- 00:16:07.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.249 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=80119 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 80119 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 80119 ']' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:07.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:07.249 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:07.249 [2024-05-15 13:34:20.249429] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:07.249 [2024-05-15 13:34:20.249534] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.507 [2024-05-15 13:34:20.377864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
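The nvmf_veth_init trace above builds the virtual test network: three veth pairs, with the target-side ends nvmf_tgt_if and nvmf_tgt_if2 moved into the nvmf_tgt_ns_spdk namespace and the host-side peers enslaved to the nvmf_br bridge, so 10.0.0.2 and 10.0.0.3 become two independent paths into the same target process. A condensed stand-alone sketch of the same bring-up, keeping the interface names from the log (the for-loops are a shorthand of this sketch, the script issues the commands one by one):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator side, 10.0.0.1
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # first target path, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target path, 10.0.0.3
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above simply confirm that both target addresses answer from the host and that 10.0.0.1 answers from inside the namespace before the target application is started.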
There is no support for it in SPDK. Enabled only for validation. 00:16:07.507 [2024-05-15 13:34:20.396262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:07.507 [2024-05-15 13:34:20.454765] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.507 [2024-05-15 13:34:20.454830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.507 [2024-05-15 13:34:20.454844] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.507 [2024-05-15 13:34:20.454858] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.507 [2024-05-15 13:34:20.454869] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.507 [2024-05-15 13:34:20.455159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.508 [2024-05-15 13:34:20.455226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.508 [2024-05-15 13:34:20.456092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:07.508 [2024-05-15 13:34:20.456099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.508 13:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:07.765 [2024-05-15 13:34:20.843713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.023 13:34:20 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:08.023 Malloc0 00:16:08.023 13:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:16:08.281 13:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.539 13:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.798 [2024-05-15 13:34:21.720816] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:08.798 [2024-05-15 13:34:21.721943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.798 13:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:16:09.055 [2024-05-15 13:34:21.933377] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:09.056 13:34:21 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:16:09.056 13:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:16:09.314 13:34:22 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.314 13:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:16:09.314 13:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.314 13:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:09.314 13:34:22 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:16:11.270 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:11.270 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:11.270 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.270 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
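Putting the target RPCs and the host connects from this stretch of the trace together: the target exports one 64 MiB malloc namespace through a single subsystem with listeners on both addresses, and the host then connects to that same NQN once per listener, which is what later shows up as two controller paths (nvme0c0n1, nvme0c1n1) under one subsystem. Condensed, with the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path shortened to rpc.py for readability:

# target side, against the nvmf_tgt started in the namespace
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# host side: one connect per path, same subsystem NQN ($NVME_HOST carries --hostnqn/--hostid)
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G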
# p1=nvme0c1n1 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=80197 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:16:11.271 13:34:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:11.271 [global] 00:16:11.271 thread=1 00:16:11.271 invalidate=1 00:16:11.271 rw=randrw 00:16:11.271 time_based=1 00:16:11.271 runtime=6 00:16:11.271 ioengine=libaio 00:16:11.271 direct=1 00:16:11.271 bs=4096 00:16:11.271 iodepth=128 00:16:11.271 norandommap=0 00:16:11.271 numjobs=1 00:16:11.271 00:16:11.271 verify_dump=1 00:16:11.271 verify_backlog=512 00:16:11.271 verify_state_save=0 00:16:11.271 do_verify=1 00:16:11.271 verify=crc32c-intel 00:16:11.271 [job0] 00:16:11.271 filename=/dev/nvme0n1 00:16:11.271 Could not set queue depth (nvme0n1) 00:16:11.529 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:11.529 fio-3.35 00:16:11.529 Starting 1 thread 00:16:12.461 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:12.461 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- 
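The fio-wrapper call above (-p nvmf -i 4096 -d 128 -t randrw -r 6 -v) expands to exactly the [global]/[job0] dump printed before "Starting 1 thread". For reference, the same job written out by hand rather than through the wrapper, with a made-up job-file name and the /dev/nvme0n1 filename taken from the dump:

cat > /tmp/nvmf-multipath.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randrw
time_based=1
runtime=6
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nvmf-multipath.fio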
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:12.719 13:34:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:12.977 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
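This is the actual failover exercise for the "numa" policy run: while fio (pid 80197) is in flight, the 10.0.0.2 listener is flipped to inaccessible and 10.0.0.3 to non_optimized, the host view is polled, and then the roles are swapped back the other way. Stripped of the test plumbing, one flip plus verification looks like this (rpc.py shortened as before; the cat calls stand in for check_ana_state's 20-second polling loop):

rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state      # expect: inaccessible
cat /sys/block/nvme0c1n1/ana_state      # expect: non-optimized

Note the spelling difference: the RPC takes non_optimized, while the kernel's ana_state file reports non-optimized, which is why check_ana_state compares against the hyphenated form.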
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:13.235 13:34:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 80197 00:16:18.501 00:16:18.501 job0: (groupid=0, jobs=1): err= 0: pid=80218: Wed May 15 13:34:30 2024 00:16:18.501 read: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(261MiB/6006msec) 00:16:18.501 slat (usec): min=4, max=5973, avg=54.10, stdev=208.34 00:16:18.501 clat (usec): min=1688, max=15071, avg=7864.16, stdev=1369.26 00:16:18.501 lat (usec): min=1700, max=15085, avg=7918.26, stdev=1372.77 00:16:18.501 clat percentiles (usec): 00:16:18.501 | 1.00th=[ 4293], 5.00th=[ 6063], 10.00th=[ 6718], 20.00th=[ 7177], 00:16:18.501 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 7832], 00:16:18.501 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9110], 95.00th=[11076], 00:16:18.501 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13960], 99.95th=[14353], 00:16:18.501 | 99.99th=[14615] 00:16:18.501 bw ( KiB/s): min=11656, max=27936, per=52.06%, avg=23149.33, stdev=4871.31, samples=12 00:16:18.501 iops : min= 2914, max= 6984, avg=5787.33, stdev=1217.83, samples=12 00:16:18.501 write: IOPS=6348, BW=24.8MiB/s (26.0MB/s)(136MiB/5470msec); 0 zone resets 00:16:18.501 slat (usec): min=6, max=3958, avg=58.68, stdev=153.08 00:16:18.501 clat (usec): min=1127, max=14535, avg=6828.51, stdev=1243.09 00:16:18.501 lat (usec): min=1167, max=14554, avg=6887.19, stdev=1246.11 00:16:18.501 clat percentiles (usec): 00:16:18.501 | 1.00th=[ 3294], 5.00th=[ 4113], 10.00th=[ 4948], 20.00th=[ 6325], 00:16:18.501 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 7046], 60.00th=[ 7177], 00:16:18.501 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160], 00:16:18.501 | 99.00th=[10552], 99.50th=[11076], 99.90th=[12256], 99.95th=[12518], 00:16:18.501 | 99.99th=[13173] 00:16:18.501 bw ( KiB/s): min=12168, max=27192, per=91.01%, avg=23110.00, stdev=4537.93, samples=12 00:16:18.501 iops : min= 3042, max= 6798, avg=5777.50, stdev=1134.48, samples=12 00:16:18.501 lat (msec) : 2=0.02%, 4=1.92%, 10=92.37%, 20=5.69% 00:16:18.501 cpu : usr=5.40%, sys=21.57%, ctx=5869, majf=0, minf=78 00:16:18.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:18.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:18.501 issued rwts: total=66766,34725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:18.501 00:16:18.501 Run status group 0 (all jobs): 00:16:18.501 READ: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=261MiB (273MB), run=6006-6006msec 00:16:18.501 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=136MiB (142MB), run=5470-5470msec 00:16:18.501 00:16:18.501 Disk stats (read/write): 00:16:18.501 nvme0n1: ios=65809/34023, merge=0/0, ticks=495934/218389, in_queue=714323, util=98.63% 00:16:18.501 13:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:18.501 13:34:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
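The summary above is worth a quick sanity check when reading multipath numbers: 66766 reads of 4 KiB over the 6006 ms runtime is the reported 43.4 MiB/s (45.5 MB/s), i.e. roughly 11.1k read IOPS sustained across the two paths despite the ANA flips. For example:

echo 'scale=1; 66766 * 4096 / 6.006 / 1000000' | bc      # ~45.5  MB/s (decimal)
echo 'scale=1; 66766 * 4096 / 6.006 / 1048576' | bc      # ~43.4  MiB/s (binary)
echo 'scale=0; 66766 / 6.006' | bc                       # ~11.1k read IOPS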
4420 -n optimized 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=80302 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:16:18.501 13:34:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:16:18.501 [global] 00:16:18.501 thread=1 00:16:18.501 invalidate=1 00:16:18.501 rw=randrw 00:16:18.501 time_based=1 00:16:18.501 runtime=6 00:16:18.501 ioengine=libaio 00:16:18.501 direct=1 00:16:18.501 bs=4096 00:16:18.501 iodepth=128 00:16:18.501 norandommap=0 00:16:18.501 numjobs=1 00:16:18.501 00:16:18.501 verify_dump=1 00:16:18.501 verify_backlog=512 00:16:18.501 verify_state_save=0 00:16:18.501 do_verify=1 00:16:18.501 verify=crc32c-intel 00:16:18.501 [job0] 00:16:18.501 filename=/dev/nvme0n1 00:16:18.501 Could not set queue depth (nvme0n1) 00:16:18.501 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.501 fio-3.35 00:16:18.501 Starting 1 thread 00:16:19.433 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:19.433 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:19.691 13:34:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:20.256 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:16:20.512 13:34:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 80302 00:16:24.780 00:16:24.780 job0: (groupid=0, jobs=1): err= 0: pid=80324: Wed May 15 13:34:37 2024 00:16:24.780 read: IOPS=12.5k, BW=49.0MiB/s (51.4MB/s)(294MiB/6002msec) 00:16:24.780 slat (usec): min=3, max=9548, avg=40.98, stdev=172.28 00:16:24.780 clat (usec): min=1354, max=17139, avg=7075.27, stdev=1542.48 00:16:24.780 lat (usec): min=1388, max=17168, avg=7116.25, stdev=1553.60 00:16:24.780 clat percentiles (usec): 00:16:24.780 | 1.00th=[ 3064], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 5932], 00:16:24.780 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7439], 00:16:24.780 | 70.00th=[ 7701], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9765], 00:16:24.780 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12911], 99.95th=[13829], 00:16:24.780 | 99.99th=[13829] 00:16:24.780 bw ( KiB/s): min= 9872, max=38176, per=52.76%, avg=26463.27, stdev=8709.18, samples=11 00:16:24.780 iops : min= 2468, max= 9544, avg=6615.82, stdev=2177.30, samples=11 00:16:24.780 write: IOPS=7379, BW=28.8MiB/s (30.2MB/s)(149MiB/5167msec); 0 zone resets 00:16:24.780 slat (usec): min=6, max=2913, avg=47.05, stdev=128.27 00:16:24.780 clat (usec): min=1437, max=13055, avg=6027.10, stdev=1452.41 00:16:24.780 lat (usec): min=1457, max=13722, avg=6074.15, stdev=1465.13 00:16:24.780 clat percentiles (usec): 00:16:24.780 | 1.00th=[ 2769], 5.00th=[ 3490], 10.00th=[ 3884], 20.00th=[ 4490], 00:16:24.780 | 30.00th=[ 5276], 40.00th=[ 6063], 50.00th=[ 6456], 60.00th=[ 6718], 00:16:24.780 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7504], 95.00th=[ 7767], 00:16:24.780 | 99.00th=[ 9503], 99.50th=[10552], 99.90th=[11731], 99.95th=[12256], 00:16:24.780 | 99.99th=[13042] 00:16:24.780 bw ( KiB/s): min=10200, max=37384, per=89.52%, avg=26426.91, stdev=8495.52, samples=11 00:16:24.780 iops : min= 2550, max= 9346, avg=6606.73, stdev=2123.88, samples=11 00:16:24.780 lat (msec) : 2=0.13%, 4=6.15%, 10=90.37%, 20=3.35% 00:16:24.780 cpu : usr=5.56%, sys=21.51%, ctx=6522, majf=0, minf=114 00:16:24.780 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:24.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:24.780 issued rwts: total=75268,38132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:24.780 00:16:24.780 Run status group 0 (all jobs): 00:16:24.780 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=294MiB (308MB), run=6002-6002msec 00:16:24.780 WRITE: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=149MiB (156MB), run=5167-5167msec 00:16:24.780 00:16:24.780 Disk stats (read/write): 00:16:24.780 nvme0n1: ios=73772/38132, merge=0/0, ticks=501143/216499, in_queue=717642, util=98.55% 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:24.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:16:24.780 
13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.780 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.038 rmmod nvme_tcp 00:16:25.038 rmmod nvme_fabrics 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:25.038 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 80119 ']' 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 80119 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 80119 ']' 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 80119 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80119 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80119' 00:16:25.039 killing process with pid 80119 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 80119 00:16:25.039 [2024-05-15 13:34:37.968265] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:16:25.039 13:34:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 80119 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.297 00:16:25.297 real 0m18.552s 00:16:25.297 user 1m7.589s 00:16:25.297 sys 0m11.047s 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:25.297 13:34:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:25.297 ************************************ 00:16:25.297 END TEST nvmf_target_multipath 00:16:25.297 ************************************ 00:16:25.297 13:34:38 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:25.297 13:34:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:25.297 13:34:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:25.297 13:34:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:25.297 ************************************ 00:16:25.297 START TEST nvmf_zcopy 00:16:25.297 ************************************ 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:25.297 * Looking for test storage... 
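Teardown in this part of the trace is the mirror image of setup: disconnect the controllers, delete the subsystem, kill the target, unload the kernel modules and flush the test addresses. In plain commands (rpc.py shortened as before; the final netns removal is an assumption about what _remove_spdk_ns does, it is not spelled out in the trace):

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"          # nvmfpid=80119 in this run
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip -4 addr flush nvmf_init_if
ip netns delete nvmf_tgt_ns_spdk            # assumed equivalent of _remove_spdk_ns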
00:16:25.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.297 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:25.555 Cannot find device "nvmf_tgt_br" 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.555 Cannot find device "nvmf_tgt_br2" 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:25.555 Cannot find device "nvmf_tgt_br" 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:25.555 Cannot find device "nvmf_tgt_br2" 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.555 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:16:25.555 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:25.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:16:25.813 00:16:25.813 --- 10.0.0.2 ping statistics --- 00:16:25.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.813 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:16:25.813 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:25.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:16:25.813 00:16:25.814 --- 10.0.0.3 ping statistics --- 00:16:25.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.814 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:16:25.814 00:16:25.814 --- 10.0.0.1 ping statistics --- 00:16:25.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.814 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=80572 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 80572 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 80572 ']' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:25.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.814 13:34:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:25.814 [2024-05-15 13:34:38.854820] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:25.814 [2024-05-15 13:34:38.854898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.123 [2024-05-15 13:34:38.974429] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:26.123 [2024-05-15 13:34:38.991211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.123 [2024-05-15 13:34:39.041039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.123 [2024-05-15 13:34:39.041096] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
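The veth, namespace, and bridge plumbing that nvmf/common.sh has just performed can be read more easily as a condensed sketch. Every command below is restated from the trace above (same device names, same 10.0.0.0/24 addressing); the sketch assumes root privileges and is not a substitute for the harness, which also handles teardown and tolerates the expected "Cannot find device" warnings on a clean host.

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # one bridge in the root namespace joins the three host-side peers
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # admit NVMe/TCP on port 4420 and let traffic forward across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # connectivity checks, matching the three pings in the log
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1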
00:16:26.123 [2024-05-15 13:34:39.041106] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.123 [2024-05-15 13:34:39.041114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.123 [2024-05-15 13:34:39.041122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.123 [2024-05-15 13:34:39.041151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.691 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:26.691 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:16:26.691 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.691 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.691 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 [2024-05-15 13:34:39.826045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 [2024-05-15 13:34:39.841980] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:26.950 [2024-05-15 13:34:39.842211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 malloc0 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:26.950 { 00:16:26.950 "params": { 00:16:26.950 "name": "Nvme$subsystem", 00:16:26.950 "trtype": "$TEST_TRANSPORT", 00:16:26.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:26.950 "adrfam": "ipv4", 00:16:26.950 "trsvcid": "$NVMF_PORT", 00:16:26.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:26.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:26.950 "hdgst": ${hdgst:-false}, 00:16:26.950 "ddgst": ${ddgst:-false} 00:16:26.950 }, 00:16:26.950 "method": "bdev_nvme_attach_controller" 00:16:26.950 } 00:16:26.950 EOF 00:16:26.950 )") 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:26.950 13:34:39 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:26.950 "params": { 00:16:26.950 "name": "Nvme1", 00:16:26.950 "trtype": "tcp", 00:16:26.950 "traddr": "10.0.0.2", 00:16:26.950 "adrfam": "ipv4", 00:16:26.950 "trsvcid": "4420", 00:16:26.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:26.950 "hdgst": false, 00:16:26.950 "ddgst": false 00:16:26.950 }, 00:16:26.950 "method": "bdev_nvme_attach_controller" 00:16:26.950 }' 00:16:26.950 [2024-05-15 13:34:39.941863] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:26.950 [2024-05-15 13:34:39.941975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80605 ] 00:16:27.208 [2024-05-15 13:34:40.071790] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:27.208 [2024-05-15 13:34:40.090660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.208 [2024-05-15 13:34:40.149749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.465 Running I/O for 10 seconds... 
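Everything the target needed up to this point was configured through rpc_cmd, which forwards to SPDK's scripts/rpc.py over the /var/tmp/spdk.sock socket named in the log: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1, a data and a discovery listener on 10.0.0.2:4420, and a 32 MB malloc bdev (4096-byte blocks) attached as namespace 1. A hand-run equivalent would look roughly like the sketch below; the rpc.py path under the checked-out repo is an assumption, while the verbs and flags are copied from the trace.

    # sketch of the RPC sequence driven by target/zcopy.sh; flags copied from the trace
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with the zero-copy path enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MB bdev, 4096-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The first bdevperf pass that starts here then runs a 10-second verify workload (queue depth 128, 8192-byte I/O) against that namespace; its results follow.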
00:16:37.449 00:16:37.449 Latency(us) 00:16:37.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.449 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:16:37.449 Verification LBA range: start 0x0 length 0x1000 00:16:37.449 Nvme1n1 : 10.01 6817.80 53.26 0.00 0.00 18719.07 1833.45 32705.58 00:16:37.449 =================================================================================================================== 00:16:37.449 Total : 6817.80 53.26 0.00 0.00 18719.07 1833.45 32705.58 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=80716 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:37.449 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:37.449 { 00:16:37.449 "params": { 00:16:37.449 "name": "Nvme$subsystem", 00:16:37.449 "trtype": "$TEST_TRANSPORT", 00:16:37.449 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.450 "adrfam": "ipv4", 00:16:37.450 "trsvcid": "$NVMF_PORT", 00:16:37.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.450 "hdgst": ${hdgst:-false}, 00:16:37.450 "ddgst": ${ddgst:-false} 00:16:37.450 }, 00:16:37.450 "method": "bdev_nvme_attach_controller" 00:16:37.450 } 00:16:37.450 EOF 00:16:37.450 )") 00:16:37.450 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:37.450 [2024-05-15 13:34:50.516204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.450 [2024-05-15 13:34:50.516257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.450 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
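bdevperf does not talk to the RPC socket; it reads its bdev configuration as JSON from the path given to --json, here an anonymous /dev/fd descriptor produced by gen_nvmf_target_json, whose template is expanded in the trace above. A standalone way to reproduce the second, 5-second random read/write run is to write that configuration to a file first. The attach parameters are the ones the log prints; the outer "subsystems"/"bdev" wrapper is an assumption based on SPDK's standard JSON config layout (the harness assembles it with jq), so treat this as a sketch rather than the harness's exact output.

    # sketch: same parameters the harness generates, fed via a temp file instead of /dev/fd/63
    cat > /tmp/nvme1_attach.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/nvme1_attach.json -t 5 -q 128 -w randrw -M 50 -o 8192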
00:16:37.450 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:37.450 13:34:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:37.450 "params": { 00:16:37.450 "name": "Nvme1", 00:16:37.450 "trtype": "tcp", 00:16:37.450 "traddr": "10.0.0.2", 00:16:37.450 "adrfam": "ipv4", 00:16:37.450 "trsvcid": "4420", 00:16:37.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.450 "hdgst": false, 00:16:37.450 "ddgst": false 00:16:37.450 }, 00:16:37.450 "method": "bdev_nvme_attach_controller" 00:16:37.450 }' 00:16:37.450 [2024-05-15 13:34:50.528170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.450 [2024-05-15 13:34:50.528206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.450 [2024-05-15 13:34:50.540152] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.450 [2024-05-15 13:34:50.540182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.548143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.548169] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.556146] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.556172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.564147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.564181] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.564709] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:16:37.706 [2024-05-15 13:34:50.565183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80716 ] 00:16:37.706 [2024-05-15 13:34:50.572169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.572202] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.584157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.584185] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.592150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.592175] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.600159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.600189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.608154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.608183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.616167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.616198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.628185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.628227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.636181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.636215] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.644178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.644209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.652175] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.706 [2024-05-15 13:34:50.652206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.706 [2024-05-15 13:34:50.660176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.660208] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.668177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.668209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.676198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.676248] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.684181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.684211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.692177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.692205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.696679] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:37.707 [2024-05-15 13:34:50.700180] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.700209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.708181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.708209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.716196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.716228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.716720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.707 [2024-05-15 13:34:50.724199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.724227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.736210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.736262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.744206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.744246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.756199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.756234] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.764198] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.764227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.774254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.707 [2024-05-15 13:34:50.776217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.776255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.784209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.784246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.792226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.792272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.707 [2024-05-15 13:34:50.804226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:16:37.707 [2024-05-15 13:34:50.804274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.964 [2024-05-15 13:34:50.816237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.964 [2024-05-15 13:34:50.816282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.964 [2024-05-15 13:34:50.824227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.964 [2024-05-15 13:34:50.824269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.836225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.836264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.844225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.844264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.856243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.856290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.864263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.864312] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.872259] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.872295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.880288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.880331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.892287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.892325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.904291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.904328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.916299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.916333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.924324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.924362] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.936305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.936338] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 Running I/O for 5 seconds... 
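The *ERROR* pairs that fill the remainder of the run are the target refusing an nvmf_subsystem_add_ns RPC because NSID 1 is already occupied by malloc0; only the RPC fails, and the I/O job keeps going, as the "Running I/O for 5 seconds..." line and the entries that follow show. Under the same assumptions as the earlier RPC sketch (default socket, repo path), the same pair of messages can be provoked by hand:

    # with malloc0 already attached as NSID 1, a second attach is rejected
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # target log then shows:
    #   spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
    #   nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace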
00:16:37.965 [2024-05-15 13:34:50.947369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.947405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.962651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.962689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.978585] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.978639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:50.992484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:50.992525] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:51.006863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:51.006909] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:51.016959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:51.017001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:51.027557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:51.027616] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:51.038726] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:51.038768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.965 [2024-05-15 13:34:51.049608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.965 [2024-05-15 13:34:51.049649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.065071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.065113] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.074613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.074653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.087742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.087781] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.101423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.101460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.115409] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.115447] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.129705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 
[2024-05-15 13:34:51.129748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.144607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.144650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.222 [2024-05-15 13:34:51.158395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.222 [2024-05-15 13:34:51.158441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.172477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.172516] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.186993] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.187036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.196550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.196588] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.207053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.207090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.222351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.222388] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.236377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.236415] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.251125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.251163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.264801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.264841] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.279471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.279515] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.293929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.293969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.303405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.303444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.223 [2024-05-15 13:34:51.316571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.223 [2024-05-15 13:34:51.316609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.329661] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.329716] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.344406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.344446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.355382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.355421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.369988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.370027] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.379410] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.379446] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.389854] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.389892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.400191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.400227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.411839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.411876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.421105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.421145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.430746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.430783] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.440191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.440228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.453624] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.453663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.462820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.462858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.473559] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.473608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.487707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.487748] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.501745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.501788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.517317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.517356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.526774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.526814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.536875] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.536913] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.550107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.550150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.480 [2024-05-15 13:34:51.564599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.480 [2024-05-15 13:34:51.564642] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.578628] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.578678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.593981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.594025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.609890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.609934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.623859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.623900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.638931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.638973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.655744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.655793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.672132] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.672189] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.681931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.681996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.696561] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.696611] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.710756] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.710839] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.724779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.724833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.739226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.739286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.753962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.754026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.768228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.768310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.778144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.778188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.788561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.788600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.802975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.803015] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.812439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.812475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.826230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.826280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.836764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.836805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.849361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.849421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.863928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.863968] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.803 [2024-05-15 13:34:51.878399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.803 [2024-05-15 13:34:51.878436] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.891930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.891972] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.906440] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.906498] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.916202] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.916253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.926758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.926796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.937215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.937262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.947443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.947479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.959016] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.959053] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.968176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.968212] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.978801] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.978852] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:51.993157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:51.993207] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.007586] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.007642] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.022699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.022748] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.034293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.034357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.043706] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.043755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.054434] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.054472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.064637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.064674] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.074980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.075019] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.091736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.091798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.101712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.101768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.116552] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.116622] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.130737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.130801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.145262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.145314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.063 [2024-05-15 13:34:52.154813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.063 [2024-05-15 13:34:52.154851] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.167768] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.167806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.181305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.181345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.190790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.190830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.201871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.201911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.212349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.212386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.225481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.225518] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.234948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.234985] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.247870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.247906] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.262394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.262435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.276857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.276897] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.288153] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.288196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.301428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.301472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.313659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.313701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.325553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.325595] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.339626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.339671] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.353714] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.353757] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.368467] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.368536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.382232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.382299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.397568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.397609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.321 [2024-05-15 13:34:52.412248] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.321 [2024-05-15 13:34:52.412290] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.426490] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.426531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.436959] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.437006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.447905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.447949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.463753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.463794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.477285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.477329] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.491513] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.491556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.506170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.506230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.521204] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.521278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.532257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.532311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.542509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.542562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.557066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.557116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.571720] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.571763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.586463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.586503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.600267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.600311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.613915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.613957] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.628449] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.628489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.642916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.642963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.653989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.654034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.662751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.662793] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.579 [2024-05-15 13:34:52.674370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.579 [2024-05-15 13:34:52.674416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.689065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.689115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.703415] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.703462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.718192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.718253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.732753] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.732801] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.747335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.747377] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.761973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.762024] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.775865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.775912] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.790278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.790320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.801735] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.801779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.814901] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.814947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.830758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.830824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.844635] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.844682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.859430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.859479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.870979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.871033] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.884490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.884543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.900308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.900357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.911582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.911632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:39.837 [2024-05-15 13:34:52.927093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:39.837 [2024-05-15 13:34:52.927142] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:52.939225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:52.939303] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:52.955499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:52.955546] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:52.967045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:52.967093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:52.979979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:52.980032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:52.995123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:52.995180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.006793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.006843] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.019509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.019558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.032055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.032101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.044733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.044786] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.059151] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.059206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.073159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.073205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.087573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.087619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.102001] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.102049] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.116566] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.116608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.130841] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.130884] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.144569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.144615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.159199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.159255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.170639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.170680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.095 [2024-05-15 13:34:53.186392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.095 [2024-05-15 13:34:53.186435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.197867] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.197908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.210835] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.210881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.222918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.222966] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.238672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.238713] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.253222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.253282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.264287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.264328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.272464] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.272511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.283563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.283599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.295805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.295847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.306607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.306664] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.352 [2024-05-15 13:34:53.317550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.352 [2024-05-15 13:34:53.317612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.327807] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.327866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.337046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.337105] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.347587] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.347645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.358426] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.358494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.368916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.368973] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.379231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.379298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.390558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.390612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.399602] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.399645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.410080] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.410124] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.420109] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.420157] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.430208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.430264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.353 [2024-05-15 13:34:53.440042] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.353 [2024-05-15 13:34:53.440084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.452926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.452990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.467821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.467869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.482011] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.482052] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.496422] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.496463] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.510640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.510680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.524500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.524539] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.538711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.538749] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.552481] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.552520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.567844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.567882] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.583723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.583762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.597717] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.597756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.606900] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.606936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.617170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.617209] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.633484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.633523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.649704] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.649751] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.659540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.659577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.674964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.675001] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.688679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.688715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.611 [2024-05-15 13:34:53.703514] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.611 [2024-05-15 13:34:53.703553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.717766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.717807] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.732403] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.732458] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.746424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.746466] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.761290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.761347] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.775556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.775602] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.789508] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.789550] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.808027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.808083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.818158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.818200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.826892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.826935] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.841922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.841995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.868 [2024-05-15 13:34:53.856408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.868 [2024-05-15 13:34:53.856450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.871856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.871897] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.888412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.888452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.904668] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.904709] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.921332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.921409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.939373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.939421] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:40.869 [2024-05-15 13:34:53.954318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:40.869 [2024-05-15 13:34:53.954363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:53.971135] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:53.971177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:53.987583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:53.987621] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.003674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.003712] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.017865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.017902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.033267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.033301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.049178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.049214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.063606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.063644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.078800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.078837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.095025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.095077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.106149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.106198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.122051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.122107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.138028] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.138077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.149099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.149145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.164699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.164761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.180458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.180511] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.196889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.196962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.131 [2024-05-15 13:34:54.213547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.131 [2024-05-15 13:34:54.213601] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.230176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.230221] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.247057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.247102] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.263836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.263886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.274783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.274824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.290712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.290753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.309393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.309438] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.323542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.323590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.339465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.339509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.356008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.356048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.372787] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.372828] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.388686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.388723] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.400353] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.400392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.415357] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.415396] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.431814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.431854] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.442865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.442905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.459194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.459246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.391 [2024-05-15 13:34:54.476036] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.391 [2024-05-15 13:34:54.476082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.649 [2024-05-15 13:34:54.492987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.649 [2024-05-15 13:34:54.493038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.649 [2024-05-15 13:34:54.508918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.508964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.520221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.520272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.535133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.535174] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.551562] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.551599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.567852] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.567900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.579625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.579667] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.587123] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.587160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.601402] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.601435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.616942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.616977] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.632748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.632787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.646936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.646976] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.661818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.661865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.679003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.679051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.694387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.694441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.710447] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.710501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.726572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.726633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.650 [2024-05-15 13:34:54.737842] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.650 [2024-05-15 13:34:54.737900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.753963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.754016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.770977] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.771032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.787480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.787530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.803805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.803855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.820521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.820574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.838056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.838117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.853710] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.853766] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.869778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.869834] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.886247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.886305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.898333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.898379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.913946] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.913992] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.930886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.930933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.948632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.948680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.964616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.964662] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.982572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.982625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:41.908 [2024-05-15 13:34:54.998444] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:41.908 [2024-05-15 13:34:54.998490] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.166 [2024-05-15 13:34:55.015316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.015357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.031651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.031689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.047981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.048018] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.064194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.064245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.081368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.081410] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.097576] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.097617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.108701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.108743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.116542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.116577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.131806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.131848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.147486] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.147526] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.163435] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.163480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.180642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.180685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.196957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.197006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.214135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.214182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.229397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.229460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.240070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.240106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.167 [2024-05-15 13:34:55.255429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.167 [2024-05-15 13:34:55.255471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.272795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.272837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.288886] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.288924] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.305961] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.305999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.317183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.317222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.333315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.333363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.350398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.350439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.361549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.361591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.377213] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.377264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.395043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.395083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.410070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.410107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.425094] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.425133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.436729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.436768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.452127] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.452164] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.468578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.468619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.480013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.480055] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.495536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.495580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.425 [2024-05-15 13:34:55.513681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.425 [2024-05-15 13:34:55.513727] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.683 [2024-05-15 13:34:55.528727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.683 [2024-05-15 13:34:55.528769] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.683 [2024-05-15 13:34:55.539952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.683 [2024-05-15 13:34:55.539991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.683 [2024-05-15 13:34:55.556081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.683 [2024-05-15 13:34:55.556119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.683 [2024-05-15 13:34:55.572614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.683 [2024-05-15 13:34:55.572653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.683 [2024-05-15 13:34:55.589893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.683 [2024-05-15 13:34:55.589934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.604813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.604865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.621761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.621814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.642381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.642428] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.658665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.658711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.675697] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.675743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.696091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.696136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.713053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.713100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.729967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.730013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.746988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.747039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.763194] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.763258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.684 [2024-05-15 13:34:55.781223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.684 [2024-05-15 13:34:55.781283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.941 [2024-05-15 13:34:55.796922] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.941 [2024-05-15 13:34:55.796964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.941 [2024-05-15 13:34:55.808244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.941 [2024-05-15 13:34:55.808293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.941 [2024-05-15 13:34:55.823832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.941 [2024-05-15 13:34:55.823872] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.834967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.835005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.850838] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.850875] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.860907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.860942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.876545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.876580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.892533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.892568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.901078] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.901116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.915816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.915863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.932024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.932077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.943733] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.943794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 00:16:42.942 Latency(us) 00:16:42.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.942 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 128, IO size: 8192) 00:16:42.942 Nvme1n1 : 5.01 12879.44 100.62 0.00 0.00 9925.98 3464.05 21346.01 00:16:42.942 =================================================================================================================== 00:16:42.942 Total : 12879.44 100.62 0.00 0.00 9925.98 3464.05 21346.01 00:16:42.942 [2024-05-15 13:34:55.953226] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.953284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.965225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.965282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.977209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.977253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:55.989220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:55.989271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:56.001214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:56.001258] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:56.013212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:56.013253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:56.025219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:56.025263] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:42.942 [2024-05-15 13:34:56.037219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:42.942 [2024-05-15 13:34:56.037260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.049217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.049257] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.061221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.061256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.073237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.073280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.085233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.085271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.097233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.097268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.109243] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.109284] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.121236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.121268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.133245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.133275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 [2024-05-15 13:34:56.145253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:43.201 [2024-05-15 13:34:56.145283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:43.201 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (80716) - No such process 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 80716 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:43.201 delay0 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.201 13:34:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:43.470 [2024-05-15 13:34:56.348199] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:51.596 Initializing NVMe Controllers 00:16:51.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.596 Initialization complete. Launching workers. 
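The abort run traced above (target/zcopy.sh lines 52-56) boils down to a short RPC sequence that swaps the original namespace for a deliberately slow delay bdev. A minimal sketch of the equivalent commands, assuming a running target that already exposes bdev malloc0 as NSID 1 of nqn.2016-06.io.spdk:cnode1 and answers on the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Replace the plain malloc namespace with a delay bdev (~1 s of added latency, values in usec).
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

  # Queue random I/O against the slow namespace and exercise abort handling over NVMe/TCP.
  /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort statistics that follow come from that example binary running for the 5 seconds requested with -t.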
00:16:51.596 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 271, failed: 20490 00:16:51.596 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20674, failed to submit 87 00:16:51.596 success 20553, unsuccess 121, failed 0 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.596 rmmod nvme_tcp 00:16:51.596 rmmod nvme_fabrics 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 80572 ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 80572 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 80572 ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 80572 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80572 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:51.596 killing process with pid 80572 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80572' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 80572 00:16:51.596 [2024-05-15 13:35:03.482405] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 80572 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:51.596 00:16:51.596 real 0m25.425s 00:16:51.596 user 0m40.109s 00:16:51.596 sys 0m7.912s 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:51.596 ************************************ 00:16:51.596 END TEST nvmf_zcopy 00:16:51.596 ************************************ 00:16:51.596 13:35:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:51.596 13:35:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:51.596 13:35:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:51.596 13:35:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:51.596 13:35:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:51.596 ************************************ 00:16:51.596 START TEST nvmf_nmic 00:16:51.596 ************************************ 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:51.596 * Looking for test storage... 00:16:51.596 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.596 13:35:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:51.597 Cannot find device "nvmf_tgt_br" 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.597 Cannot find device "nvmf_tgt_br2" 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:51.597 Cannot find device "nvmf_tgt_br" 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:51.597 Cannot find device "nvmf_tgt_br2" 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br 
type bridge 00:16:51.597 13:35:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.597 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:51.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:51.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:16:51.597 00:16:51.597 --- 10.0.0.2 ping statistics --- 00:16:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.597 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:51.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:51.597 00:16:51.597 --- 10.0.0.3 ping statistics --- 00:16:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.597 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:51.597 00:16:51.597 --- 10.0.0.1 ping statistics --- 00:16:51.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.597 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=81051 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 81051 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 81051 ']' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:51.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.597 [2024-05-15 13:35:04.316208] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
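The veth plumbing exercised by nvmf_veth_init above reduces to the commands below; this is a minimal sketch, assuming root on a Linux host with iproute2 and iptables, and omitting the second target interface (nvmf_tgt_if2 / 10.0.0.3), which is created the same way:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator <-> bridge
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target    <-> bridge
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # host -> target namespace, the sanity check shown above
  modprobe nvme-tcp         # initiator-side kernel transport, loaded right after the pings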
00:16:51.597 [2024-05-15 13:35:04.317134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.597 [2024-05-15 13:35:04.448584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:51.597 [2024-05-15 13:35:04.467727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.597 [2024-05-15 13:35:04.521310] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.597 [2024-05-15 13:35:04.521382] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.597 [2024-05-15 13:35:04.521393] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.597 [2024-05-15 13:35:04.521407] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.597 [2024-05-15 13:35:04.521416] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.597 [2024-05-15 13:35:04.521926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.597 [2024-05-15 13:35:04.522001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.597 [2024-05-15 13:35:04.522738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.597 [2024-05-15 13:35:04.522746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.597 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.597 [2024-05-15 13:35:04.672756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 Malloc0 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 [2024-05-15 13:35:04.744780] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:51.855 [2024-05-15 13:35:04.745199] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.855 test case1: single bdev can't be used in multiple subsystems 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 [2024-05-15 13:35:04.768851] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:51.855 [2024-05-15 13:35:04.768897] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:51.855 [2024-05-15 13:35:04.768910] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:51.855 request: 00:16:51.855 { 00:16:51.855 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:51.855 "namespace": { 00:16:51.855 "bdev_name": "Malloc0", 00:16:51.855 "no_auto_visible": false 00:16:51.855 }, 00:16:51.855 "method": "nvmf_subsystem_add_ns", 00:16:51.855 "req_id": 1 00:16:51.855 } 00:16:51.855 Got JSON-RPC error response 00:16:51.855 response: 00:16:51.855 { 00:16:51.855 "code": -32602, 00:16:51.855 "message": "Invalid 
parameters" 00:16:51.855 } 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:51.855 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:51.855 Adding namespace failed - expected result. 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:51.856 test case2: host connect to nvmf target in multiple paths 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:51.856 [2024-05-15 13:35:04.781007] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.856 13:35:04 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:52.113 13:35:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.113 13:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:16:52.113 13:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.113 13:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:16:52.113 13:35:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:16:54.011 13:35:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:54.011 [global] 00:16:54.011 thread=1 00:16:54.011 invalidate=1 00:16:54.011 rw=write 00:16:54.011 time_based=1 00:16:54.011 runtime=1 00:16:54.011 ioengine=libaio 00:16:54.011 direct=1 00:16:54.011 bs=4096 00:16:54.011 iodepth=1 00:16:54.011 norandommap=0 00:16:54.011 numjobs=1 00:16:54.011 00:16:54.011 verify_dump=1 00:16:54.011 verify_backlog=512 00:16:54.011 verify_state_save=0 00:16:54.011 do_verify=1 00:16:54.011 verify=crc32c-intel 00:16:54.011 [job0] 00:16:54.011 filename=/dev/nvme0n1 00:16:54.011 Could not set queue depth 
(nvme0n1) 00:16:54.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:54.269 fio-3.35 00:16:54.269 Starting 1 thread 00:16:55.644 00:16:55.644 job0: (groupid=0, jobs=1): err= 0: pid=81134: Wed May 15 13:35:08 2024 00:16:55.644 read: IOPS=3200, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:16:55.644 slat (nsec): min=8724, max=42690, avg=10824.78, stdev=2813.31 00:16:55.644 clat (usec): min=119, max=603, avg=166.53, stdev=23.25 00:16:55.644 lat (usec): min=130, max=621, avg=177.35, stdev=23.37 00:16:55.644 clat percentiles (usec): 00:16:55.644 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 149], 00:16:55.644 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:16:55.644 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:16:55.644 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 408], 99.95th=[ 553], 00:16:55.644 | 99.99th=[ 603] 00:16:55.644 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:55.644 slat (usec): min=13, max=261, avg=16.88, stdev= 7.94 00:16:55.644 clat (usec): min=44, max=596, avg=101.45, stdev=21.99 00:16:55.644 lat (usec): min=88, max=611, avg=118.33, stdev=23.74 00:16:55.644 clat percentiles (usec): 00:16:55.644 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 89], 00:16:55.644 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 103], 00:16:55.644 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 125], 00:16:55.644 | 99.00th=[ 143], 99.50th=[ 165], 99.90th=[ 494], 99.95th=[ 578], 00:16:55.644 | 99.99th=[ 594] 00:16:55.644 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:16:55.644 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:55.644 lat (usec) : 50=0.01%, 100=27.62%, 250=72.11%, 500=0.16%, 750=0.09% 00:16:55.644 cpu : usr=1.70%, sys=7.70%, ctx=6790, majf=0, minf=2 00:16:55.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:55.644 issued rwts: total=3204,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:55.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:55.644 00:16:55.644 Run status group 0 (all jobs): 00:16:55.644 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:16:55.644 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:55.644 00:16:55.644 Disk stats (read/write): 00:16:55.644 nvme0n1: ios=3082/3072, merge=0/0, ticks=538/322, in_queue=860, util=91.38% 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:55.644 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.645 rmmod nvme_tcp 00:16:55.645 rmmod nvme_fabrics 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 81051 ']' 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 81051 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 81051 ']' 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 81051 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81051 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81051' 00:16:55.645 killing process with pid 81051 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 81051 00:16:55.645 [2024-05-15 13:35:08.645567] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:55.645 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 81051 00:16:55.902 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:55.903 00:16:55.903 real 0m5.130s 00:16:55.903 user 0m15.442s 00:16:55.903 sys 0m2.553s 00:16:55.903 13:35:08 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.903 13:35:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 ************************************ 00:16:55.903 END TEST nvmf_nmic 00:16:55.903 ************************************ 00:16:55.903 13:35:08 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:55.903 13:35:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:55.903 13:35:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:55.903 13:35:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:55.903 ************************************ 00:16:55.903 START TEST nvmf_fio_target 00:16:55.903 ************************************ 00:16:55.903 13:35:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:56.160 * Looking for test storage... 00:16:56.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.160 Cannot find device "nvmf_tgt_br" 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:16:56.160 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.161 Cannot find device "nvmf_tgt_br2" 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:16:56.161 Cannot find device "nvmf_tgt_br" 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.161 Cannot find device "nvmf_tgt_br2" 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.161 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.161 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:16:56.419 00:16:56.419 --- 10.0.0.2 ping statistics --- 00:16:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.419 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:16:56.419 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:16:56.419 00:16:56.419 --- 10.0.0.3 ping statistics --- 00:16:56.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.419 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:16:56.420 00:16:56.420 --- 10.0.0.1 ping statistics --- 00:16:56.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.420 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=81312 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 81312 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 81312 ']' 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.420 13:35:09 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:56.420 13:35:09 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.420 [2024-05-15 13:35:09.496097] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:16:56.420 [2024-05-15 13:35:09.496214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.677 [2024-05-15 13:35:09.639138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:56.677 [2024-05-15 13:35:09.650716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:56.677 [2024-05-15 13:35:09.705336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.677 [2024-05-15 13:35:09.705391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.677 [2024-05-15 13:35:09.705403] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.677 [2024-05-15 13:35:09.705412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.677 [2024-05-15 13:35:09.705421] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
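For reference, the namespace and bridge plumbing traced above (nvmf/common.sh @166-207) can be approximated with the standalone sketch below. It reuses the interface names, namespace name, addresses, and port 4420 exactly as they appear in the log, but it is a simplified illustration rather than the harness script itself: it assumes it is run as root on a host with iproute2 and iptables available, and it sets up only the first target interface (the harness adds a second one, 10.0.0.3 on nvmf_tgt_if2, the same way).
#!/usr/bin/env bash
set -euo pipefail
ip netns add nvmf_tgt_ns_spdk                               # namespace that will host nvmf_tgt
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                             # bridge joining the host-side veth ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                          # sanity-check reachability of the target
The iptables rule mirrors the one in the trace: NVMe/TCP uses TCP port 4420 by default, so the initiator-side interface has to accept inbound 4420 for the later nvme connect to 10.0.0.2 to succeed.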
00:16:56.677 [2024-05-15 13:35:09.705514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.677 [2024-05-15 13:35:09.705675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:56.677 [2024-05-15 13:35:09.706116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:56.677 [2024-05-15 13:35:09.706122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.612 13:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:57.870 [2024-05-15 13:35:10.733563] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.870 13:35:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.128 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:58.128 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.386 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:58.386 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.643 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:58.643 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:58.901 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:58.901 13:35:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:59.159 13:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.417 13:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:59.417 13:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:59.676 13:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:59.676 13:35:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:00.242 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:00.242 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:00.500 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:17:00.757 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:00.757 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:01.014 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:01.014 13:35:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:01.271 13:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:01.556 [2024-05-15 13:35:14.438091] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:01.556 [2024-05-15 13:35:14.438825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.556 13:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:01.818 13:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:02.075 13:35:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.075 13:35:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:02.075 13:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:17:02.075 13:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:17:02.075 13:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:17:02.076 13:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:17:02.076 13:35:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:17:04.602 13:35:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:04.602 [global] 00:17:04.602 thread=1 00:17:04.602 invalidate=1 00:17:04.602 rw=write 00:17:04.602 time_based=1 00:17:04.602 runtime=1 00:17:04.602 ioengine=libaio 00:17:04.602 direct=1 00:17:04.602 bs=4096 00:17:04.602 iodepth=1 00:17:04.602 norandommap=0 00:17:04.602 numjobs=1 00:17:04.602 00:17:04.602 verify_dump=1 00:17:04.602 verify_backlog=512 
00:17:04.602 verify_state_save=0 00:17:04.602 do_verify=1 00:17:04.602 verify=crc32c-intel 00:17:04.602 [job0] 00:17:04.602 filename=/dev/nvme0n1 00:17:04.602 [job1] 00:17:04.602 filename=/dev/nvme0n2 00:17:04.602 [job2] 00:17:04.602 filename=/dev/nvme0n3 00:17:04.602 [job3] 00:17:04.602 filename=/dev/nvme0n4 00:17:04.602 Could not set queue depth (nvme0n1) 00:17:04.602 Could not set queue depth (nvme0n2) 00:17:04.602 Could not set queue depth (nvme0n3) 00:17:04.602 Could not set queue depth (nvme0n4) 00:17:04.602 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.602 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.602 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.602 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:04.602 fio-3.35 00:17:04.602 Starting 4 threads 00:17:05.542 00:17:05.542 job0: (groupid=0, jobs=1): err= 0: pid=81503: Wed May 15 13:35:18 2024 00:17:05.542 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:17:05.542 slat (nsec): min=8671, max=96167, avg=15464.62, stdev=7089.00 00:17:05.542 clat (usec): min=157, max=929, avg=247.52, stdev=36.21 00:17:05.542 lat (usec): min=168, max=942, avg=262.98, stdev=38.82 00:17:05.542 clat percentiles (usec): 00:17:05.542 | 1.00th=[ 184], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 221], 00:17:05.542 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:17:05.542 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:17:05.542 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 383], 99.95th=[ 498], 00:17:05.542 | 99.99th=[ 930] 00:17:05.542 write: IOPS=2310, BW=9243KiB/s (9465kB/s)(9252KiB/1001msec); 0 zone resets 00:17:05.542 slat (usec): min=12, max=109, avg=23.75, stdev=10.11 00:17:05.542 clat (usec): min=101, max=650, avg=172.36, stdev=31.20 00:17:05.542 lat (usec): min=114, max=727, avg=196.11, stdev=34.19 00:17:05.542 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 120], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 147], 00:17:05.543 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:17:05.543 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 210], 95.00th=[ 225], 00:17:05.543 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 343], 99.95th=[ 383], 00:17:05.543 | 99.99th=[ 652] 00:17:05.543 bw ( KiB/s): min= 8192, max= 8192, per=25.94%, avg=8192.00, stdev= 0.00, samples=1 00:17:05.543 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:05.543 lat (usec) : 250=78.67%, 500=21.28%, 750=0.02%, 1000=0.02% 00:17:05.543 cpu : usr=2.30%, sys=6.50%, ctx=4364, majf=0, minf=3 00:17:05.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 issued rwts: total=2048,2313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.543 job1: (groupid=0, jobs=1): err= 0: pid=81504: Wed May 15 13:35:18 2024 00:17:05.543 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:17:05.543 slat (usec): min=7, max=532, avg=10.44, stdev=10.56 00:17:05.543 clat (usec): min=3, max=490, avg=187.76, stdev=24.92 00:17:05.543 lat (usec): min=141, max=535, avg=198.19, stdev=26.00 00:17:05.543 clat 
percentiles (usec): 00:17:05.543 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:17:05.543 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 186], 60.00th=[ 192], 00:17:05.543 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 231], 00:17:05.543 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 343], 99.95th=[ 379], 00:17:05.543 | 99.99th=[ 490] 00:17:05.543 write: IOPS=2839, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:17:05.543 slat (nsec): min=10484, max=92077, avg=18575.37, stdev=6524.38 00:17:05.543 clat (usec): min=94, max=1311, avg=152.29, stdev=98.38 00:17:05.543 lat (usec): min=109, max=1330, avg=170.86, stdev=102.18 00:17:05.543 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 119], 00:17:05.543 | 30.00th=[ 125], 40.00th=[ 130], 50.00th=[ 137], 60.00th=[ 143], 00:17:05.543 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 186], 00:17:05.543 | 99.00th=[ 775], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 922], 00:17:05.543 | 99.99th=[ 1319] 00:17:05.543 bw ( KiB/s): min=12288, max=12288, per=38.92%, avg=12288.00, stdev= 0.00, samples=1 00:17:05.543 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:05.543 lat (usec) : 4=0.02%, 100=0.30%, 250=97.93%, 500=0.52%, 750=0.56% 00:17:05.543 lat (usec) : 1000=0.67% 00:17:05.543 lat (msec) : 2=0.02% 00:17:05.543 cpu : usr=1.60%, sys=6.60%, ctx=5402, majf=0, minf=9 00:17:05.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 issued rwts: total=2560,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.543 job2: (groupid=0, jobs=1): err= 0: pid=81505: Wed May 15 13:35:18 2024 00:17:05.543 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:05.543 slat (usec): min=10, max=127, avg=24.05, stdev=10.30 00:17:05.543 clat (usec): min=235, max=1682, avg=503.57, stdev=206.54 00:17:05.543 lat (usec): min=255, max=1746, avg=527.62, stdev=212.16 00:17:05.543 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 310], 00:17:05.543 | 30.00th=[ 326], 40.00th=[ 367], 50.00th=[ 498], 60.00th=[ 545], 00:17:05.543 | 70.00th=[ 586], 80.00th=[ 693], 90.00th=[ 766], 95.00th=[ 848], 00:17:05.543 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1336], 99.95th=[ 1680], 00:17:05.543 | 99.99th=[ 1680] 00:17:05.543 write: IOPS=1343, BW=5375KiB/s (5504kB/s)(5380KiB/1001msec); 0 zone resets 00:17:05.543 slat (usec): min=15, max=134, avg=30.03, stdev=11.67 00:17:05.543 clat (usec): min=108, max=1483, avg=307.05, stdev=143.23 00:17:05.543 lat (usec): min=130, max=1508, avg=337.08, stdev=148.18 00:17:05.543 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 128], 5.00th=[ 182], 10.00th=[ 202], 20.00th=[ 212], 00:17:05.543 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 260], 00:17:05.543 | 70.00th=[ 326], 80.00th=[ 437], 90.00th=[ 519], 95.00th=[ 562], 00:17:05.543 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 1336], 99.95th=[ 1483], 00:17:05.543 | 99.99th=[ 1483] 00:17:05.543 bw ( KiB/s): min= 4096, max= 4096, per=12.97%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.543 lat (usec) : 250=31.41%, 500=40.19%, 750=22.58%, 1000=4.64% 00:17:05.543 lat (msec) : 2=1.18% 
00:17:05.543 cpu : usr=1.00%, sys=5.70%, ctx=2388, majf=0, minf=12 00:17:05.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 issued rwts: total=1024,1345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.543 job3: (groupid=0, jobs=1): err= 0: pid=81506: Wed May 15 13:35:18 2024 00:17:05.543 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:05.543 slat (usec): min=11, max=114, avg=23.64, stdev=10.93 00:17:05.543 clat (usec): min=171, max=1698, avg=508.19, stdev=213.43 00:17:05.543 lat (usec): min=184, max=1743, avg=531.82, stdev=218.77 00:17:05.543 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 285], 20.00th=[ 314], 00:17:05.543 | 30.00th=[ 334], 40.00th=[ 379], 50.00th=[ 502], 60.00th=[ 545], 00:17:05.543 | 70.00th=[ 586], 80.00th=[ 685], 90.00th=[ 758], 95.00th=[ 947], 00:17:05.543 | 99.00th=[ 1106], 99.50th=[ 1188], 99.90th=[ 1631], 99.95th=[ 1696], 00:17:05.543 | 99.99th=[ 1696] 00:17:05.543 write: IOPS=1400, BW=5602KiB/s (5737kB/s)(5608KiB/1001msec); 0 zone resets 00:17:05.543 slat (usec): min=14, max=202, avg=31.33, stdev=14.04 00:17:05.543 clat (usec): min=106, max=1769, avg=288.92, stdev=136.70 00:17:05.543 lat (usec): min=126, max=1802, avg=320.25, stdev=140.57 00:17:05.543 clat percentiles (usec): 00:17:05.543 | 1.00th=[ 128], 5.00th=[ 149], 10.00th=[ 184], 20.00th=[ 202], 00:17:05.543 | 30.00th=[ 215], 40.00th=[ 227], 50.00th=[ 239], 60.00th=[ 249], 00:17:05.543 | 70.00th=[ 281], 80.00th=[ 420], 90.00th=[ 506], 95.00th=[ 553], 00:17:05.543 | 99.00th=[ 627], 99.50th=[ 816], 99.90th=[ 1012], 99.95th=[ 1762], 00:17:05.543 | 99.99th=[ 1762] 00:17:05.543 bw ( KiB/s): min= 4096, max= 4096, per=12.97%, avg=4096.00, stdev= 0.00, samples=1 00:17:05.543 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:05.543 lat (usec) : 250=35.49%, 500=37.35%, 750=22.05%, 1000=3.42% 00:17:05.543 lat (msec) : 2=1.69% 00:17:05.543 cpu : usr=1.70%, sys=5.30%, ctx=2427, majf=0, minf=11 00:17:05.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:05.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.543 issued rwts: total=1024,1402,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:05.543 00:17:05.543 Run status group 0 (all jobs): 00:17:05.543 READ: bw=26.0MiB/s (27.2MB/s), 4092KiB/s-9.99MiB/s (4190kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:17:05.543 WRITE: bw=30.8MiB/s (32.3MB/s), 5375KiB/s-11.1MiB/s (5504kB/s-11.6MB/s), io=30.9MiB (32.4MB), run=1001-1001msec 00:17:05.543 00:17:05.543 Disk stats (read/write): 00:17:05.543 nvme0n1: ios=1652/2048, merge=0/0, ticks=436/376, in_queue=812, util=86.57% 00:17:05.543 nvme0n2: ios=2148/2560, merge=0/0, ticks=424/369, in_queue=793, util=86.87% 00:17:05.543 nvme0n3: ios=830/1024, merge=0/0, ticks=460/347, in_queue=807, util=89.06% 00:17:05.543 nvme0n4: ios=851/1024, merge=0/0, ticks=470/330, in_queue=800, util=89.63% 00:17:05.543 13:35:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:05.543 [global] 
00:17:05.543 thread=1 00:17:05.543 invalidate=1 00:17:05.543 rw=randwrite 00:17:05.543 time_based=1 00:17:05.543 runtime=1 00:17:05.543 ioengine=libaio 00:17:05.543 direct=1 00:17:05.543 bs=4096 00:17:05.543 iodepth=1 00:17:05.543 norandommap=0 00:17:05.543 numjobs=1 00:17:05.543 00:17:05.543 verify_dump=1 00:17:05.543 verify_backlog=512 00:17:05.543 verify_state_save=0 00:17:05.543 do_verify=1 00:17:05.543 verify=crc32c-intel 00:17:05.543 [job0] 00:17:05.543 filename=/dev/nvme0n1 00:17:05.543 [job1] 00:17:05.543 filename=/dev/nvme0n2 00:17:05.543 [job2] 00:17:05.543 filename=/dev/nvme0n3 00:17:05.543 [job3] 00:17:05.543 filename=/dev/nvme0n4 00:17:05.543 Could not set queue depth (nvme0n1) 00:17:05.543 Could not set queue depth (nvme0n2) 00:17:05.543 Could not set queue depth (nvme0n3) 00:17:05.543 Could not set queue depth (nvme0n4) 00:17:05.800 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.800 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.800 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.800 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:05.800 fio-3.35 00:17:05.800 Starting 4 threads 00:17:07.220 00:17:07.220 job0: (groupid=0, jobs=1): err= 0: pid=81559: Wed May 15 13:35:19 2024 00:17:07.220 read: IOPS=2622, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1000msec) 00:17:07.220 slat (usec): min=8, max=540, avg=12.94, stdev=14.56 00:17:07.220 clat (usec): min=5, max=1327, avg=183.75, stdev=44.61 00:17:07.220 lat (usec): min=144, max=1362, avg=196.69, stdev=48.15 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:17:07.220 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:17:07.220 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 223], 00:17:07.220 | 99.00th=[ 297], 99.50th=[ 363], 99.90th=[ 873], 99.95th=[ 906], 00:17:07.220 | 99.99th=[ 1336] 00:17:07.220 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:17:07.220 slat (nsec): min=10391, max=90485, avg=19966.66, stdev=6316.58 00:17:07.220 clat (usec): min=94, max=4208, avg=134.80, stdev=75.51 00:17:07.220 lat (usec): min=108, max=4234, avg=154.76, stdev=76.39 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 103], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:17:07.220 | 30.00th=[ 124], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:17:07.220 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 155], 95.00th=[ 165], 00:17:07.220 | 99.00th=[ 188], 99.50th=[ 200], 99.90th=[ 225], 99.95th=[ 318], 00:17:07.220 | 99.99th=[ 4228] 00:17:07.220 bw ( KiB/s): min=12288, max=12288, per=31.66%, avg=12288.00, stdev= 0.00, samples=1 00:17:07.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:07.220 lat (usec) : 10=0.02%, 100=0.28%, 250=98.60%, 500=0.97%, 750=0.05% 00:17:07.220 lat (usec) : 1000=0.05% 00:17:07.220 lat (msec) : 2=0.02%, 10=0.02% 00:17:07.220 cpu : usr=2.80%, sys=7.10%, ctx=5705, majf=0, minf=11 00:17:07.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 issued rwts: total=2622,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:07.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.220 job1: (groupid=0, jobs=1): err= 0: pid=81560: Wed May 15 13:35:19 2024 00:17:07.220 read: IOPS=2818, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:17:07.220 slat (nsec): min=8089, max=44115, avg=10928.47, stdev=2962.40 00:17:07.220 clat (usec): min=141, max=752, avg=181.04, stdev=18.79 00:17:07.220 lat (usec): min=149, max=762, avg=191.97, stdev=19.57 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:17:07.220 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:17:07.220 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:17:07.220 | 99.00th=[ 225], 99.50th=[ 235], 99.90th=[ 255], 99.95th=[ 260], 00:17:07.220 | 99.99th=[ 750] 00:17:07.220 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:17:07.220 slat (nsec): min=12611, max=95506, avg=16813.86, stdev=4979.37 00:17:07.220 clat (usec): min=93, max=3890, avg=129.94, stdev=69.84 00:17:07.220 lat (usec): min=108, max=3919, avg=146.75, stdev=70.54 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 114], 20.00th=[ 118], 00:17:07.220 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 130], 00:17:07.220 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 149], 95.00th=[ 157], 00:17:07.220 | 99.00th=[ 178], 99.50th=[ 194], 99.90th=[ 289], 99.95th=[ 424], 00:17:07.220 | 99.99th=[ 3884] 00:17:07.220 bw ( KiB/s): min=12288, max=12288, per=31.66%, avg=12288.00, stdev= 0.00, samples=1 00:17:07.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:17:07.220 lat (usec) : 100=0.24%, 250=99.64%, 500=0.08%, 1000=0.02% 00:17:07.220 lat (msec) : 4=0.02% 00:17:07.220 cpu : usr=1.70%, sys=6.90%, ctx=5893, majf=0, minf=8 00:17:07.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 issued rwts: total=2821,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.220 job2: (groupid=0, jobs=1): err= 0: pid=81561: Wed May 15 13:35:19 2024 00:17:07.220 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:17:07.220 slat (nsec): min=8286, max=78715, avg=18627.61, stdev=7852.25 00:17:07.220 clat (usec): min=190, max=3439, avg=324.20, stdev=131.53 00:17:07.220 lat (usec): min=213, max=3508, avg=342.83, stdev=132.49 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 225], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 265], 00:17:07.220 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 314], 00:17:07.220 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 537], 00:17:07.220 | 99.00th=[ 709], 99.50th=[ 963], 99.90th=[ 1811], 99.95th=[ 3425], 00:17:07.220 | 99.99th=[ 3425] 00:17:07.220 write: IOPS=1858, BW=7433KiB/s (7611kB/s)(7440KiB/1001msec); 0 zone resets 00:17:07.220 slat (usec): min=13, max=148, avg=28.79, stdev=12.23 00:17:07.220 clat (usec): min=104, max=1340, avg=221.91, stdev=77.79 00:17:07.220 lat (usec): min=124, max=1412, avg=250.70, stdev=80.99 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 118], 5.00th=[ 135], 10.00th=[ 145], 20.00th=[ 174], 00:17:07.220 | 30.00th=[ 194], 40.00th=[ 206], 50.00th=[ 219], 60.00th=[ 229], 00:17:07.220 | 70.00th=[ 243], 80.00th=[ 
260], 90.00th=[ 281], 95.00th=[ 306], 00:17:07.220 | 99.00th=[ 441], 99.50th=[ 693], 99.90th=[ 1188], 99.95th=[ 1336], 00:17:07.220 | 99.99th=[ 1336] 00:17:07.220 bw ( KiB/s): min= 8192, max= 8192, per=21.10%, avg=8192.00, stdev= 0.00, samples=1 00:17:07.220 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:07.220 lat (usec) : 250=44.29%, 500=52.24%, 750=3.00%, 1000=0.21% 00:17:07.220 lat (msec) : 2=0.24%, 4=0.03% 00:17:07.220 cpu : usr=1.40%, sys=7.00%, ctx=3396, majf=0, minf=17 00:17:07.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 issued rwts: total=1536,1860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.220 job3: (groupid=0, jobs=1): err= 0: pid=81562: Wed May 15 13:35:19 2024 00:17:07.220 read: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec) 00:17:07.220 slat (nsec): min=9934, max=85511, avg=16291.22, stdev=6424.24 00:17:07.220 clat (usec): min=167, max=3525, avg=320.89, stdev=112.92 00:17:07.220 lat (usec): min=179, max=3544, avg=337.18, stdev=113.84 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 221], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:17:07.220 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 314], 00:17:07.220 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 404], 95.00th=[ 469], 00:17:07.220 | 99.00th=[ 701], 99.50th=[ 742], 99.90th=[ 979], 99.95th=[ 3523], 00:17:07.220 | 99.99th=[ 3523] 00:17:07.220 write: IOPS=1716, BW=6866KiB/s (7031kB/s)(6880KiB/1002msec); 0 zone resets 00:17:07.220 slat (usec): min=14, max=261, avg=28.73, stdev=16.15 00:17:07.220 clat (usec): min=114, max=1742, avg=248.17, stdev=97.66 00:17:07.220 lat (usec): min=133, max=1766, avg=276.91, stdev=102.96 00:17:07.220 clat percentiles (usec): 00:17:07.220 | 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 157], 20.00th=[ 196], 00:17:07.220 | 30.00th=[ 210], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 249], 00:17:07.220 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 334], 95.00th=[ 388], 00:17:07.220 | 99.00th=[ 562], 99.50th=[ 766], 99.90th=[ 1565], 99.95th=[ 1745], 00:17:07.220 | 99.99th=[ 1745] 00:17:07.220 bw ( KiB/s): min= 8192, max= 8192, per=21.10%, avg=8192.00, stdev= 0.00, samples=1 00:17:07.220 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:17:07.220 lat (usec) : 250=35.26%, 500=62.25%, 750=2.00%, 1000=0.34% 00:17:07.220 lat (msec) : 2=0.12%, 4=0.03% 00:17:07.220 cpu : usr=1.80%, sys=5.69%, ctx=3290, majf=0, minf=11 00:17:07.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:07.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:07.220 issued rwts: total=1536,1720,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:07.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:07.220 00:17:07.220 Run status group 0 (all jobs): 00:17:07.220 READ: bw=33.2MiB/s (34.8MB/s), 6132KiB/s-11.0MiB/s (6279kB/s-11.5MB/s), io=33.3MiB (34.9MB), run=1000-1002msec 00:17:07.220 WRITE: bw=37.9MiB/s (39.8MB/s), 6866KiB/s-12.0MiB/s (7031kB/s-12.6MB/s), io=38.0MiB (39.8MB), run=1000-1002msec 00:17:07.220 00:17:07.220 Disk stats (read/write): 00:17:07.220 nvme0n1: ios=2151/2560, merge=0/0, ticks=441/362, in_queue=803, util=86.10% 
00:17:07.220 nvme0n2: ios=2342/2560, merge=0/0, ticks=434/346, in_queue=780, util=85.68% 00:17:07.220 nvme0n3: ios=1273/1536, merge=0/0, ticks=428/344, in_queue=772, util=88.63% 00:17:07.220 nvme0n4: ios=1202/1536, merge=0/0, ticks=392/383, in_queue=775, util=89.48% 00:17:07.220 13:35:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:07.220 [global] 00:17:07.220 thread=1 00:17:07.220 invalidate=1 00:17:07.220 rw=write 00:17:07.220 time_based=1 00:17:07.220 runtime=1 00:17:07.220 ioengine=libaio 00:17:07.220 direct=1 00:17:07.220 bs=4096 00:17:07.220 iodepth=128 00:17:07.220 norandommap=0 00:17:07.220 numjobs=1 00:17:07.220 00:17:07.220 verify_dump=1 00:17:07.220 verify_backlog=512 00:17:07.220 verify_state_save=0 00:17:07.220 do_verify=1 00:17:07.220 verify=crc32c-intel 00:17:07.220 [job0] 00:17:07.220 filename=/dev/nvme0n1 00:17:07.220 [job1] 00:17:07.220 filename=/dev/nvme0n2 00:17:07.220 [job2] 00:17:07.220 filename=/dev/nvme0n3 00:17:07.220 [job3] 00:17:07.220 filename=/dev/nvme0n4 00:17:07.220 Could not set queue depth (nvme0n1) 00:17:07.220 Could not set queue depth (nvme0n2) 00:17:07.221 Could not set queue depth (nvme0n3) 00:17:07.221 Could not set queue depth (nvme0n4) 00:17:07.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.221 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.221 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.221 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:07.221 fio-3.35 00:17:07.221 Starting 4 threads 00:17:08.596 00:17:08.596 job0: (groupid=0, jobs=1): err= 0: pid=81615: Wed May 15 13:35:21 2024 00:17:08.596 read: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec) 00:17:08.596 slat (usec): min=8, max=15692, avg=244.26, stdev=1086.31 00:17:08.596 clat (usec): min=16706, max=61141, avg=27478.62, stdev=6893.27 00:17:08.596 lat (usec): min=16722, max=61178, avg=27722.88, stdev=7023.17 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[19268], 5.00th=[20055], 10.00th=[20579], 20.00th=[21103], 00:17:08.596 | 30.00th=[22152], 40.00th=[23200], 50.00th=[26346], 60.00th=[29230], 00:17:08.596 | 70.00th=[31851], 80.00th=[32637], 90.00th=[34866], 95.00th=[39584], 00:17:08.596 | 99.00th=[48497], 99.50th=[50070], 99.90th=[57934], 99.95th=[61080], 00:17:08.596 | 99.99th=[61080] 00:17:08.596 write: IOPS=1778, BW=7116KiB/s (7286kB/s)(7144KiB/1004msec); 0 zone resets 00:17:08.596 slat (usec): min=8, max=17044, avg=339.62, stdev=1433.13 00:17:08.596 clat (usec): min=3775, max=72884, avg=46391.90, stdev=10671.06 00:17:08.596 lat (usec): min=5827, max=72914, avg=46731.52, stdev=10755.47 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[11207], 5.00th=[29230], 10.00th=[34866], 20.00th=[40109], 00:17:08.596 | 30.00th=[41157], 40.00th=[43254], 50.00th=[46400], 60.00th=[50070], 00:17:08.596 | 70.00th=[51643], 80.00th=[54789], 90.00th=[58983], 95.00th=[63701], 00:17:08.596 | 99.00th=[67634], 99.50th=[68682], 99.90th=[70779], 99.95th=[72877], 00:17:08.596 | 99.99th=[72877] 00:17:08.596 bw ( KiB/s): min= 5896, max= 7376, per=12.31%, avg=6636.00, stdev=1046.52, samples=2 00:17:08.596 iops : min= 1474, max= 1844, avg=1659.00, stdev=261.63, samples=2 00:17:08.596 lat (msec) : 4=0.03%, 10=0.48%, 20=2.44%, 
50=75.35%, 100=21.70% 00:17:08.596 cpu : usr=1.60%, sys=5.98%, ctx=263, majf=0, minf=4 00:17:08.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:17:08.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.596 issued rwts: total=1536,1786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.596 job1: (groupid=0, jobs=1): err= 0: pid=81616: Wed May 15 13:35:21 2024 00:17:08.596 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:17:08.596 slat (usec): min=7, max=3786, avg=96.92, stdev=442.74 00:17:08.596 clat (usec): min=703, max=16432, avg=12511.28, stdev=1593.44 00:17:08.596 lat (usec): min=3138, max=16614, avg=12608.20, stdev=1544.01 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[ 6587], 5.00th=[10683], 10.00th=[11207], 20.00th=[11863], 00:17:08.596 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:17:08.596 | 70.00th=[12649], 80.00th=[13042], 90.00th=[15008], 95.00th=[15533], 00:17:08.596 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16450], 99.95th=[16450], 00:17:08.596 | 99.99th=[16450] 00:17:08.596 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:17:08.596 slat (usec): min=6, max=3843, avg=94.42, stdev=461.10 00:17:08.596 clat (usec): min=8127, max=16758, avg=12404.01, stdev=1768.04 00:17:08.596 lat (usec): min=9463, max=16768, avg=12498.43, stdev=1730.53 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[ 9241], 5.00th=[10421], 10.00th=[10683], 20.00th=[11338], 00:17:08.596 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:17:08.596 | 70.00th=[12256], 80.00th=[13042], 90.00th=[15664], 95.00th=[16057], 00:17:08.596 | 99.00th=[16581], 99.50th=[16581], 99.90th=[16712], 99.95th=[16712], 00:17:08.596 | 99.99th=[16712] 00:17:08.596 bw ( KiB/s): min=20480, max=20480, per=37.98%, avg=20480.00, stdev= 0.00, samples=2 00:17:08.596 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:17:08.596 lat (usec) : 750=0.01% 00:17:08.596 lat (msec) : 4=0.32%, 10=2.49%, 20=97.18% 00:17:08.596 cpu : usr=2.50%, sys=9.99%, ctx=435, majf=0, minf=1 00:17:08.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:08.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.596 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.596 job2: (groupid=0, jobs=1): err= 0: pid=81617: Wed May 15 13:35:21 2024 00:17:08.596 read: IOPS=3686, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1006msec) 00:17:08.596 slat (usec): min=7, max=16325, avg=136.05, stdev=838.76 00:17:08.596 clat (usec): min=4981, max=62238, avg=17459.02, stdev=8605.38 00:17:08.596 lat (usec): min=4996, max=62279, avg=17595.07, stdev=8679.98 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[ 5735], 5.00th=[11469], 10.00th=[12780], 20.00th=[14091], 00:17:08.596 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14615], 60.00th=[14877], 00:17:08.596 | 70.00th=[15664], 80.00th=[17695], 90.00th=[25035], 95.00th=[44303], 00:17:08.596 | 99.00th=[49021], 99.50th=[49021], 99.90th=[50070], 99.95th=[61080], 00:17:08.596 | 99.99th=[62129] 00:17:08.596 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 
zone resets 00:17:08.596 slat (usec): min=7, max=8361, avg=112.75, stdev=601.16 00:17:08.596 clat (usec): min=6645, max=34127, avg=15194.25, stdev=2572.75 00:17:08.596 lat (usec): min=6664, max=34152, avg=15307.00, stdev=2639.11 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[ 9896], 5.00th=[12387], 10.00th=[13042], 20.00th=[13566], 00:17:08.596 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14746], 60.00th=[15139], 00:17:08.596 | 70.00th=[16188], 80.00th=[17171], 90.00th=[17957], 95.00th=[18482], 00:17:08.596 | 99.00th=[26608], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:17:08.596 | 99.99th=[34341] 00:17:08.596 bw ( KiB/s): min=13720, max=19032, per=30.37%, avg=16376.00, stdev=3756.15, samples=2 00:17:08.596 iops : min= 3430, max= 4758, avg=4094.00, stdev=939.04, samples=2 00:17:08.596 lat (msec) : 10=1.64%, 20=90.90%, 50=7.34%, 100=0.12% 00:17:08.596 cpu : usr=4.18%, sys=9.85%, ctx=392, majf=0, minf=5 00:17:08.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:08.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.596 issued rwts: total=3709,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.596 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:08.596 job3: (groupid=0, jobs=1): err= 0: pid=81618: Wed May 15 13:35:21 2024 00:17:08.596 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:17:08.596 slat (usec): min=6, max=5130, avg=171.75, stdev=651.48 00:17:08.596 clat (usec): min=3305, max=30483, avg=21558.35, stdev=4184.78 00:17:08.596 lat (usec): min=3314, max=30494, avg=21730.10, stdev=4163.40 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[ 6325], 5.00th=[16450], 10.00th=[17695], 20.00th=[19530], 00:17:08.596 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21103], 00:17:08.596 | 70.00th=[22152], 80.00th=[25560], 90.00th=[27657], 95.00th=[29492], 00:17:08.596 | 99.00th=[30016], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:17:08.596 | 99.99th=[30540] 00:17:08.596 write: IOPS=2555, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:17:08.596 slat (usec): min=6, max=17370, avg=211.42, stdev=1323.24 00:17:08.596 clat (usec): min=1099, max=73685, avg=26959.69, stdev=16328.29 00:17:08.596 lat (usec): min=3299, max=73728, avg=27171.10, stdev=16409.52 00:17:08.596 clat percentiles (usec): 00:17:08.596 | 1.00th=[13829], 5.00th=[16188], 10.00th=[16581], 20.00th=[16909], 00:17:08.596 | 30.00th=[17433], 40.00th=[19268], 50.00th=[21365], 60.00th=[21627], 00:17:08.596 | 70.00th=[21890], 80.00th=[28705], 90.00th=[62653], 95.00th=[67634], 00:17:08.596 | 99.00th=[69731], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:17:08.596 | 99.99th=[73925] 00:17:08.596 bw ( KiB/s): min= 8192, max=12288, per=18.99%, avg=10240.00, stdev=2896.31, samples=2 00:17:08.596 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:17:08.596 lat (msec) : 2=0.02%, 4=0.25%, 10=0.62%, 20=33.80%, 50=57.55% 00:17:08.596 lat (msec) : 100=7.75% 00:17:08.596 cpu : usr=1.80%, sys=5.89%, ctx=271, majf=0, minf=5 00:17:08.596 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:08.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:08.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:08.596 issued rwts: total=2560,2561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:08.596 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:17:08.596 00:17:08.596 Run status group 0 (all jobs): 00:17:08.596 READ: bw=49.8MiB/s (52.2MB/s), 6120KiB/s-19.6MiB/s (6266kB/s-20.5MB/s), io=50.1MiB (52.6MB), run=1002-1006msec 00:17:08.596 WRITE: bw=52.7MiB/s (55.2MB/s), 7116KiB/s-20.0MiB/s (7286kB/s-20.9MB/s), io=53.0MiB (55.6MB), run=1002-1006msec 00:17:08.596 00:17:08.596 Disk stats (read/write): 00:17:08.596 nvme0n1: ios=1200/1536, merge=0/0, ticks=10778/23619, in_queue=34397, util=87.17% 00:17:08.596 nvme0n2: ios=4145/4385, merge=0/0, ticks=12201/12491, in_queue=24692, util=86.94% 00:17:08.596 nvme0n3: ios=3072/3351, merge=0/0, ticks=26995/22232, in_queue=49227, util=88.14% 00:17:08.596 nvme0n4: ios=2048/2163, merge=0/0, ticks=11473/14416, in_queue=25889, util=89.53% 00:17:08.596 13:35:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:08.596 [global] 00:17:08.596 thread=1 00:17:08.596 invalidate=1 00:17:08.596 rw=randwrite 00:17:08.596 time_based=1 00:17:08.596 runtime=1 00:17:08.596 ioengine=libaio 00:17:08.596 direct=1 00:17:08.597 bs=4096 00:17:08.597 iodepth=128 00:17:08.597 norandommap=0 00:17:08.597 numjobs=1 00:17:08.597 00:17:08.597 verify_dump=1 00:17:08.597 verify_backlog=512 00:17:08.597 verify_state_save=0 00:17:08.597 do_verify=1 00:17:08.597 verify=crc32c-intel 00:17:08.597 [job0] 00:17:08.597 filename=/dev/nvme0n1 00:17:08.597 [job1] 00:17:08.597 filename=/dev/nvme0n2 00:17:08.597 [job2] 00:17:08.597 filename=/dev/nvme0n3 00:17:08.597 [job3] 00:17:08.597 filename=/dev/nvme0n4 00:17:08.597 Could not set queue depth (nvme0n1) 00:17:08.597 Could not set queue depth (nvme0n2) 00:17:08.597 Could not set queue depth (nvme0n3) 00:17:08.597 Could not set queue depth (nvme0n4) 00:17:08.597 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.597 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.597 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.597 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:08.597 fio-3.35 00:17:08.597 Starting 4 threads 00:17:09.970 00:17:09.970 job0: (groupid=0, jobs=1): err= 0: pid=81682: Wed May 15 13:35:22 2024 00:17:09.970 read: IOPS=4588, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:17:09.970 slat (usec): min=6, max=8405, avg=106.70, stdev=717.76 00:17:09.970 clat (usec): min=2587, max=25326, avg=14620.16, stdev=1944.60 00:17:09.970 lat (usec): min=2602, max=28751, avg=14726.86, stdev=1969.77 00:17:09.970 clat percentiles (usec): 00:17:09.970 | 1.00th=[ 8455], 5.00th=[11994], 10.00th=[13304], 20.00th=[13829], 00:17:09.970 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:17:09.970 | 70.00th=[15008], 80.00th=[15401], 90.00th=[16712], 95.00th=[17171], 00:17:09.970 | 99.00th=[21103], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:17:09.970 | 99.99th=[25297] 00:17:09.970 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:17:09.970 slat (usec): min=5, max=11025, avg=104.05, stdev=702.15 00:17:09.970 clat (usec): min=6398, max=19448, avg=12998.64, stdev=1418.97 00:17:09.970 lat (usec): min=8723, max=19462, avg=13102.69, stdev=1276.65 00:17:09.970 clat percentiles (usec): 00:17:09.970 | 1.00th=[ 7767], 5.00th=[11338], 10.00th=[11731], 
20.00th=[11994], 00:17:09.970 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:17:09.970 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14091], 95.00th=[14615], 00:17:09.970 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:17:09.970 | 99.99th=[19530] 00:17:09.970 bw ( KiB/s): min=17056, max=19808, per=27.10%, avg=18432.00, stdev=1945.96, samples=2 00:17:09.970 iops : min= 4264, max= 4952, avg=4608.00, stdev=486.49, samples=2 00:17:09.970 lat (msec) : 4=0.02%, 10=3.33%, 20=95.84%, 50=0.80% 00:17:09.970 cpu : usr=3.29%, sys=8.38%, ctx=228, majf=0, minf=5 00:17:09.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:09.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.970 issued rwts: total=4602,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.970 job1: (groupid=0, jobs=1): err= 0: pid=81683: Wed May 15 13:35:22 2024 00:17:09.970 read: IOPS=4400, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1005msec) 00:17:09.970 slat (usec): min=3, max=13007, avg=106.27, stdev=673.31 00:17:09.970 clat (usec): min=1326, max=31655, avg=14304.97, stdev=2955.26 00:17:09.970 lat (usec): min=6382, max=31667, avg=14411.24, stdev=2977.11 00:17:09.970 clat percentiles (usec): 00:17:09.970 | 1.00th=[ 6980], 5.00th=[ 9765], 10.00th=[12780], 20.00th=[13304], 00:17:09.970 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:17:09.970 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15533], 95.00th=[16319], 00:17:09.970 | 99.00th=[28443], 99.50th=[29754], 99.90th=[31589], 99.95th=[31589], 00:17:09.970 | 99.99th=[31589] 00:17:09.970 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:17:09.970 slat (usec): min=5, max=12886, avg=109.01, stdev=709.24 00:17:09.970 clat (usec): min=4185, max=31613, avg=13936.37, stdev=2606.85 00:17:09.970 lat (usec): min=4200, max=31621, avg=14045.39, stdev=2542.86 00:17:09.970 clat percentiles (usec): 00:17:09.970 | 1.00th=[ 7046], 5.00th=[11338], 10.00th=[11994], 20.00th=[12518], 00:17:09.970 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14091], 00:17:09.970 | 70.00th=[14222], 80.00th=[15401], 90.00th=[17433], 95.00th=[18482], 00:17:09.970 | 99.00th=[22152], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:17:09.970 | 99.99th=[31589] 00:17:09.970 bw ( KiB/s): min=17392, max=19472, per=27.10%, avg=18432.00, stdev=1470.78, samples=2 00:17:09.970 iops : min= 4348, max= 4868, avg=4608.00, stdev=367.70, samples=2 00:17:09.970 lat (msec) : 2=0.01%, 10=4.76%, 20=91.71%, 50=3.52% 00:17:09.970 cpu : usr=3.69%, sys=8.47%, ctx=382, majf=0, minf=4 00:17:09.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:09.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.970 issued rwts: total=4422,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.970 job2: (groupid=0, jobs=1): err= 0: pid=81684: Wed May 15 13:35:22 2024 00:17:09.970 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1005msec) 00:17:09.970 slat (usec): min=6, max=7760, avg=126.48, stdev=694.86 00:17:09.970 clat (usec): min=1005, max=25434, avg=15715.01, stdev=2213.24 00:17:09.970 lat (usec): min=4136, max=25452, 
avg=15841.49, stdev=2261.58 00:17:09.970 clat percentiles (usec): 00:17:09.970 | 1.00th=[ 6587], 5.00th=[12125], 10.00th=[13829], 20.00th=[14877], 00:17:09.970 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15926], 60.00th=[16188], 00:17:09.971 | 70.00th=[16450], 80.00th=[16712], 90.00th=[17433], 95.00th=[18744], 00:17:09.971 | 99.00th=[21627], 99.50th=[22938], 99.90th=[24249], 99.95th=[24511], 00:17:09.971 | 99.99th=[25560] 00:17:09.971 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:17:09.971 slat (usec): min=5, max=7289, avg=116.01, stdev=601.12 00:17:09.971 clat (usec): min=7367, max=27626, avg=15824.90, stdev=2065.23 00:17:09.971 lat (usec): min=7392, max=27638, avg=15940.91, stdev=2133.65 00:17:09.971 clat percentiles (usec): 00:17:09.971 | 1.00th=[10159], 5.00th=[13304], 10.00th=[14222], 20.00th=[14877], 00:17:09.971 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:17:09.971 | 70.00th=[15926], 80.00th=[16188], 90.00th=[18220], 95.00th=[20317], 00:17:09.971 | 99.00th=[22676], 99.50th=[25035], 99.90th=[26870], 99.95th=[27657], 00:17:09.971 | 99.99th=[27657] 00:17:09.971 bw ( KiB/s): min=16384, max=16384, per=24.09%, avg=16384.00, stdev= 0.00, samples=2 00:17:09.971 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:17:09.971 lat (msec) : 2=0.01%, 10=1.27%, 20=94.00%, 50=4.72% 00:17:09.971 cpu : usr=2.69%, sys=8.57%, ctx=437, majf=0, minf=5 00:17:09.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:09.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.971 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.971 job3: (groupid=0, jobs=1): err= 0: pid=81685: Wed May 15 13:35:22 2024 00:17:09.971 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:17:09.971 slat (usec): min=6, max=6024, avg=134.84, stdev=694.24 00:17:09.971 clat (usec): min=11903, max=23917, avg=17588.50, stdev=2399.78 00:17:09.971 lat (usec): min=14714, max=23927, avg=17723.34, stdev=2319.03 00:17:09.971 clat percentiles (usec): 00:17:09.971 | 1.00th=[12780], 5.00th=[15401], 10.00th=[15664], 20.00th=[15795], 00:17:09.971 | 30.00th=[16057], 40.00th=[16450], 50.00th=[16712], 60.00th=[17433], 00:17:09.971 | 70.00th=[17957], 80.00th=[19268], 90.00th=[22676], 95.00th=[23200], 00:17:09.971 | 99.00th=[23462], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:17:09.971 | 99.99th=[23987] 00:17:09.971 write: IOPS=3761, BW=14.7MiB/s (15.4MB/s)(14.8MiB/1004msec); 0 zone resets 00:17:09.971 slat (usec): min=5, max=8100, avg=129.94, stdev=656.38 00:17:09.971 clat (usec): min=775, max=28372, avg=16831.75, stdev=3352.00 00:17:09.971 lat (usec): min=5440, max=28394, avg=16961.69, stdev=3304.94 00:17:09.971 clat percentiles (usec): 00:17:09.971 | 1.00th=[11076], 5.00th=[14353], 10.00th=[14615], 20.00th=[14877], 00:17:09.971 | 30.00th=[15270], 40.00th=[15533], 50.00th=[15664], 60.00th=[16188], 00:17:09.971 | 70.00th=[16712], 80.00th=[18220], 90.00th=[22152], 95.00th=[23462], 00:17:09.971 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:17:09.971 | 99.99th=[28443] 00:17:09.971 bw ( KiB/s): min=12808, max=16384, per=21.46%, avg=14596.00, stdev=2528.61, samples=2 00:17:09.971 iops : min= 3202, max= 4096, avg=3649.00, stdev=632.15, samples=2 00:17:09.971 lat (usec) : 1000=0.01% 00:17:09.971 lat 
(msec) : 10=0.43%, 20=84.35%, 50=15.20% 00:17:09.971 cpu : usr=3.09%, sys=8.18%, ctx=272, majf=0, minf=7 00:17:09.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:09.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.971 issued rwts: total=3584,3777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.971 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.971 00:17:09.971 Run status group 0 (all jobs): 00:17:09.971 READ: bw=64.4MiB/s (67.5MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.7MiB (67.8MB), run=1003-1005msec 00:17:09.971 WRITE: bw=66.4MiB/s (69.6MB/s), 14.7MiB/s-17.9MiB/s (15.4MB/s-18.8MB/s), io=66.8MiB (70.0MB), run=1003-1005msec 00:17:09.971 00:17:09.971 Disk stats (read/write): 00:17:09.971 nvme0n1: ios=3886/4096, merge=0/0, ticks=53251/50677, in_queue=103928, util=87.88% 00:17:09.971 nvme0n2: ios=3633/4095, merge=0/0, ticks=49628/54455, in_queue=104083, util=88.57% 00:17:09.971 nvme0n3: ios=3294/3584, merge=0/0, ticks=25793/26183, in_queue=51976, util=89.47% 00:17:09.971 nvme0n4: ios=3072/3296, merge=0/0, ticks=12667/12557, in_queue=25224, util=89.72% 00:17:09.971 13:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:09.971 13:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=81698 00:17:09.971 13:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:09.971 13:35:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:09.971 [global] 00:17:09.971 thread=1 00:17:09.971 invalidate=1 00:17:09.971 rw=read 00:17:09.971 time_based=1 00:17:09.971 runtime=10 00:17:09.971 ioengine=libaio 00:17:09.971 direct=1 00:17:09.971 bs=4096 00:17:09.971 iodepth=1 00:17:09.971 norandommap=1 00:17:09.971 numjobs=1 00:17:09.971 00:17:09.971 [job0] 00:17:09.971 filename=/dev/nvme0n1 00:17:09.971 [job1] 00:17:09.971 filename=/dev/nvme0n2 00:17:09.971 [job2] 00:17:09.971 filename=/dev/nvme0n3 00:17:09.971 [job3] 00:17:09.971 filename=/dev/nvme0n4 00:17:09.971 Could not set queue depth (nvme0n1) 00:17:09.971 Could not set queue depth (nvme0n2) 00:17:09.971 Could not set queue depth (nvme0n3) 00:17:09.971 Could not set queue depth (nvme0n4) 00:17:09.971 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.971 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.971 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.971 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:09.971 fio-3.35 00:17:09.971 Starting 4 threads 00:17:13.251 13:35:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:13.251 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36913152, buflen=4096 00:17:13.251 fio: pid=81741, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.251 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:13.251 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=43249664, buflen=4096 00:17:13.251 fio: pid=81740, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O 
error 00:17:13.251 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.251 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:13.510 fio: pid=81738, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.510 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=9072640, buflen=4096 00:17:13.510 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:13.510 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:13.769 fio: pid=81739, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:13.769 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=12898304, buflen=4096 00:17:14.029 00:17:14.029 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81738: Wed May 15 13:35:26 2024 00:17:14.029 read: IOPS=5385, BW=21.0MiB/s (22.1MB/s)(72.7MiB/3454msec) 00:17:14.029 slat (usec): min=8, max=10796, avg=12.50, stdev=131.51 00:17:14.029 clat (usec): min=114, max=7670, avg=172.08, stdev=77.03 00:17:14.029 lat (usec): min=138, max=11008, avg=184.58, stdev=153.68 00:17:14.029 clat percentiles (usec): 00:17:14.029 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:17:14.029 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:17:14.029 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 202], 00:17:14.029 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 338], 99.95th=[ 371], 00:17:14.029 | 99.99th=[ 6849] 00:17:14.029 bw ( KiB/s): min=20040, max=23648, per=35.69%, avg=21936.00, stdev=1246.84, samples=6 00:17:14.029 iops : min= 5010, max= 5912, avg=5484.00, stdev=311.71, samples=6 00:17:14.029 lat (usec) : 250=99.37%, 500=0.60% 00:17:14.029 lat (msec) : 2=0.02%, 10=0.01% 00:17:14.029 cpu : usr=1.07%, sys=5.62%, ctx=18628, majf=0, minf=1 00:17:14.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 issued rwts: total=18600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.029 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81739: Wed May 15 13:35:26 2024 00:17:14.029 read: IOPS=5202, BW=20.3MiB/s (21.3MB/s)(76.3MiB/3755msec) 00:17:14.029 slat (usec): min=8, max=19982, avg=14.72, stdev=208.06 00:17:14.029 clat (usec): min=4, max=9134, avg=176.40, stdev=78.69 00:17:14.029 lat (usec): min=133, max=20189, avg=191.12, stdev=222.94 00:17:14.029 clat percentiles (usec): 00:17:14.029 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:17:14.029 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:17:14.029 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 217], 00:17:14.029 | 99.00th=[ 258], 99.50th=[ 326], 99.90th=[ 717], 99.95th=[ 1401], 00:17:14.029 | 99.99th=[ 1893] 00:17:14.029 bw ( KiB/s): min=19232, max=23400, per=33.67%, avg=20695.71, stdev=1386.17, samples=7 00:17:14.029 iops : min= 4808, max= 5850, avg=5173.86, stdev=346.60, samples=7 00:17:14.029 lat (usec) : 
10=0.01%, 250=98.75%, 500=1.04%, 750=0.11%, 1000=0.03% 00:17:14.029 lat (msec) : 2=0.06%, 10=0.01% 00:17:14.029 cpu : usr=1.44%, sys=5.38%, ctx=19607, majf=0, minf=1 00:17:14.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 issued rwts: total=19534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.029 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81740: Wed May 15 13:35:26 2024 00:17:14.029 read: IOPS=3331, BW=13.0MiB/s (13.6MB/s)(41.2MiB/3170msec) 00:17:14.029 slat (usec): min=8, max=9849, avg=15.44, stdev=123.84 00:17:14.029 clat (usec): min=122, max=1558, avg=283.13, stdev=57.49 00:17:14.029 lat (usec): min=135, max=10046, avg=298.57, stdev=135.48 00:17:14.029 clat percentiles (usec): 00:17:14.029 | 1.00th=[ 155], 5.00th=[ 180], 10.00th=[ 198], 20.00th=[ 255], 00:17:14.029 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:17:14.029 | 70.00th=[ 302], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 375], 00:17:14.029 | 99.00th=[ 424], 99.50th=[ 437], 99.90th=[ 490], 99.95th=[ 979], 00:17:14.029 | 99.99th=[ 1188] 00:17:14.029 bw ( KiB/s): min=12128, max=13824, per=21.30%, avg=13092.00, stdev=716.40, samples=6 00:17:14.029 iops : min= 3032, max= 3456, avg=3273.00, stdev=179.10, samples=6 00:17:14.029 lat (usec) : 250=16.97%, 500=82.94%, 750=0.03%, 1000=0.02% 00:17:14.029 lat (msec) : 2=0.04% 00:17:14.029 cpu : usr=1.10%, sys=4.26%, ctx=10564, majf=0, minf=1 00:17:14.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 issued rwts: total=10560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.029 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81741: Wed May 15 13:35:26 2024 00:17:14.029 read: IOPS=3077, BW=12.0MiB/s (12.6MB/s)(35.2MiB/2929msec) 00:17:14.029 slat (usec): min=8, max=3600, avg=16.17, stdev=38.17 00:17:14.029 clat (usec): min=4, max=11661, avg=306.83, stdev=128.68 00:17:14.029 lat (usec): min=165, max=11679, avg=323.00, stdev=133.94 00:17:14.029 clat percentiles (usec): 00:17:14.029 | 1.00th=[ 237], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 273], 00:17:14.029 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 302], 00:17:14.029 | 70.00th=[ 314], 80.00th=[ 330], 90.00th=[ 379], 95.00th=[ 408], 00:17:14.029 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 523], 99.95th=[ 611], 00:17:14.029 | 99.99th=[11600] 00:17:14.029 bw ( KiB/s): min=11568, max=13264, per=20.05%, avg=12324.80, stdev=743.00, samples=5 00:17:14.029 iops : min= 2892, max= 3316, avg=3081.20, stdev=185.75, samples=5 00:17:14.029 lat (usec) : 10=0.01%, 250=3.01%, 500=96.74%, 750=0.21%, 1000=0.01% 00:17:14.029 lat (msec) : 20=0.01% 00:17:14.029 cpu : usr=1.02%, sys=4.71%, ctx=9018, majf=0, minf=2 00:17:14.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:14.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.029 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
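The per-job read summaries above (job3's continues just below) come from the job file that scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 generates from the [global]/[job0]..[job3] options echoed earlier in the log. A minimal hand-written sketch of an equivalent workload follows; the file name fio-nvmf-read.job is illustrative and not taken from the log.

# sketch of the job file fio-wrapper builds for this run (options copied from the log)
cat > fio-nvmf-read.job <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio fio-nvmf-read.job   # err=121 (Remote I/O error) is expected in this test: the backing bdevs are deleted mid-run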
00:17:14.029 issued rwts: total=9013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:14.029 00:17:14.029 Run status group 0 (all jobs): 00:17:14.029 READ: bw=60.0MiB/s (62.9MB/s), 12.0MiB/s-21.0MiB/s (12.6MB/s-22.1MB/s), io=225MiB (236MB), run=2929-3755msec 00:17:14.029 00:17:14.029 Disk stats (read/write): 00:17:14.029 nvme0n1: ios=18065/0, merge=0/0, ticks=3117/0, in_queue=3117, util=94.62% 00:17:14.029 nvme0n2: ios=18587/0, merge=0/0, ticks=3327/0, in_queue=3327, util=94.67% 00:17:14.029 nvme0n3: ios=10266/0, merge=0/0, ticks=2944/0, in_queue=2944, util=96.26% 00:17:14.029 nvme0n4: ios=8782/0, merge=0/0, ticks=2704/0, in_queue=2704, util=96.34% 00:17:14.029 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.029 13:35:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:14.287 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.287 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:14.545 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.545 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:14.804 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:14.804 13:35:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:15.062 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:15.062 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 81698 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:17:15.363 nvmf hotplug test: fio failed as expected 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:15.363 13:35:28 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:15.363 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.620 rmmod nvme_tcp 00:17:15.620 rmmod nvme_fabrics 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 81312 ']' 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 81312 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 81312 ']' 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 81312 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81312 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81312' 00:17:15.620 killing process with pid 81312 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 81312 00:17:15.620 [2024-05-15 13:35:28.630608] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:15.620 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 81312 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.879 13:35:28 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:15.879 00:17:15.879 real 0m19.910s 00:17:15.879 user 1m14.815s 00:17:15.879 sys 0m10.819s 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:15.879 13:35:28 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.879 ************************************ 00:17:15.879 END TEST nvmf_fio_target 00:17:15.879 ************************************ 00:17:15.879 13:35:28 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:15.879 13:35:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:15.879 13:35:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:15.879 13:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.879 ************************************ 00:17:15.879 START TEST nvmf_bdevio 00:17:15.879 ************************************ 00:17:15.879 13:35:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:16.138 * Looking for test storage... 00:17:16.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
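The "fio failed as expected" message above is the point of the hotplug test: the backing bdevs are removed over RPC while the background fio job is still reading, so every job terminates with err=121 (Remote I/O error) and the initiator is then torn down. A condensed sketch of that sequence, using only commands and names that appear in the log, might look like:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# long-running read workload in the background (the wrapper shown in the log)
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3                                    # let fio start issuing I/O first

# pull the backing bdevs out from under the running I/O
$rpc bdev_raid_delete concat0
$rpc bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$m"
done

wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'   # non-zero exit is the pass condition here

# detach the initiator and drop the subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1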
00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:16.138 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:16.139 Cannot find device "nvmf_tgt_br" 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.139 Cannot find device "nvmf_tgt_br2" 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # 
true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:16.139 Cannot find device "nvmf_tgt_br" 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:16.139 Cannot find device "nvmf_tgt_br2" 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:17:16.139 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.398 13:35:29 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:16.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:17:16.398 00:17:16.398 --- 10.0.0.2 ping statistics --- 00:17:16.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.398 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:16.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:17:16.398 00:17:16.398 --- 10.0.0.3 ping statistics --- 00:17:16.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.398 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:16.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:17:16.398 00:17:16.398 --- 10.0.0.1 ping statistics --- 00:17:16.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.398 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=82010 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 82010 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 82010 ']' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.398 13:35:29 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:16.656 [2024-05-15 13:35:29.517314] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:16.656 [2024-05-15 13:35:29.517401] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.656 [2024-05-15 13:35:29.655378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:16.656 [2024-05-15 13:35:29.678119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.656 [2024-05-15 13:35:29.738116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.656 [2024-05-15 13:35:29.738184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.656 [2024-05-15 13:35:29.738199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.656 [2024-05-15 13:35:29.738212] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.656 [2024-05-15 13:35:29.738224] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
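The "Cannot find device ..." messages above are only the cleanup half of nvmf_veth_init running on a fresh host; the setup half then builds the virtual test network that the three pings verify before nvmf_tgt is started inside the namespace. Condensed from the ip/iptables commands in the log, the topology is roughly the following sketch:

# target side lives in its own network namespace; the initiator stays in the root namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end is the usable interface, the peer end will join the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 and 10.0.0.3 = target listeners
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up on both sides
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends together so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic and confirm reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1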
00:17:16.656 [2024-05-15 13:35:29.738345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.656 [2024-05-15 13:35:29.738392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.656 [2024-05-15 13:35:29.738512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.656 [2024-05-15 13:35:29.738512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.588 [2024-05-15 13:35:30.641315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.588 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.589 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:17.589 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.589 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.589 Malloc0 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:17.846 [2024-05-15 13:35:30.706800] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:17.846 [2024-05-15 13:35:30.707769] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:17.846 { 00:17:17.846 "params": { 00:17:17.846 "name": "Nvme$subsystem", 00:17:17.846 "trtype": "$TEST_TRANSPORT", 00:17:17.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:17.846 "adrfam": "ipv4", 00:17:17.846 "trsvcid": "$NVMF_PORT", 00:17:17.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:17.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:17.846 "hdgst": ${hdgst:-false}, 00:17:17.846 "ddgst": ${ddgst:-false} 00:17:17.846 }, 00:17:17.846 "method": "bdev_nvme_attach_controller" 00:17:17.846 } 00:17:17.846 EOF 00:17:17.846 )") 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:17.846 13:35:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:17.846 "params": { 00:17:17.846 "name": "Nvme1", 00:17:17.846 "trtype": "tcp", 00:17:17.846 "traddr": "10.0.0.2", 00:17:17.846 "adrfam": "ipv4", 00:17:17.846 "trsvcid": "4420", 00:17:17.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:17.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:17.846 "hdgst": false, 00:17:17.846 "ddgst": false 00:17:17.846 }, 00:17:17.846 "method": "bdev_nvme_attach_controller" 00:17:17.846 }' 00:17:17.846 [2024-05-15 13:35:30.761307] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:17:17.846 [2024-05-15 13:35:30.761419] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82046 ] 00:17:17.846 [2024-05-15 13:35:30.890901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
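At this point the target side of the bdevio fixture is fully provisioned: nvme-tcp is loaded on the initiator, nvmf_tgt runs inside the namespace, and rpc_cmd (the harness wrapper around scripts/rpc.py) issues the create/attach calls seen above. Collapsed into plain rpc.py invocations, the sequence is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target application, started in the test namespace with core mask 0x78 (as in the log)
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &

$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; the -o/-u flags are copied verbatim from the harness
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # exposes Malloc0 as namespace 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420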
00:17:17.846 [2024-05-15 13:35:30.912472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:18.122 [2024-05-15 13:35:30.972986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.122 [2024-05-15 13:35:30.973089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.122 [2024-05-15 13:35:30.973096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.122 I/O targets: 00:17:18.122 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:18.122 00:17:18.122 00:17:18.123 CUnit - A unit testing framework for C - Version 2.1-3 00:17:18.123 http://cunit.sourceforge.net/ 00:17:18.123 00:17:18.123 00:17:18.123 Suite: bdevio tests on: Nvme1n1 00:17:18.123 Test: blockdev write read block ...passed 00:17:18.123 Test: blockdev write zeroes read block ...passed 00:17:18.123 Test: blockdev write zeroes read no split ...passed 00:17:18.123 Test: blockdev write zeroes read split ...passed 00:17:18.123 Test: blockdev write zeroes read split partial ...passed 00:17:18.123 Test: blockdev reset ...[2024-05-15 13:35:31.176379] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:18.123 [2024-05-15 13:35:31.176523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf089d0 (9): Bad file descriptor 00:17:18.123 [2024-05-15 13:35:31.187507] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:18.123 passed 00:17:18.123 Test: blockdev write read 8 blocks ...passed 00:17:18.123 Test: blockdev write read size > 128k ...passed 00:17:18.123 Test: blockdev write read invalid size ...passed 00:17:18.123 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:18.123 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:18.123 Test: blockdev write read max offset ...passed 00:17:18.123 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:18.123 Test: blockdev writev readv 8 blocks ...passed 00:17:18.123 Test: blockdev writev readv 30 x 1block ...passed 00:17:18.123 Test: blockdev writev readv block ...passed 00:17:18.123 Test: blockdev writev readv size > 128k ...passed 00:17:18.124 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:18.124 Test: blockdev comparev and writev ...[2024-05-15 13:35:31.197621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.197709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.197749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.197782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.198191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.198259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.198295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:17:18.124 [2024-05-15 13:35:31.198317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.198734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.198785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.198816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.198835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.199229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.199300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.199335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:18.124 [2024-05-15 13:35:31.199356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:18.124 passed 00:17:18.124 Test: blockdev nvme passthru rw ...passed 00:17:18.124 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:35:31.200271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.124 [2024-05-15 13:35:31.200328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.200478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.124 [2024-05-15 13:35:31.200518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.200660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.124 [2024-05-15 13:35:31.200708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:18.124 [2024-05-15 13:35:31.200851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:18.124 [2024-05-15 13:35:31.200896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:18.124 passed 00:17:18.124 Test: blockdev nvme admin passthru ...passed 00:17:18.124 Test: blockdev copy ...passed 00:17:18.124 00:17:18.124 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.124 suites 1 1 n/a 0 0 00:17:18.124 tests 23 23 23 0 0 00:17:18.124 asserts 152 152 152 0 n/a 00:17:18.124 00:17:18.124 Elapsed time = 0.156 seconds 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.387 rmmod nvme_tcp 00:17:18.387 rmmod nvme_fabrics 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 82010 ']' 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 82010 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 82010 ']' 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 82010 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:17:18.387 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82010 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:18.644 killing process with pid 82010 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82010' 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 82010 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 82010 00:17:18.644 [2024-05-15 13:35:31.503466] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.644 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.903 13:35:31 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:18.903 
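For reference, the bdevio binary that produced the CUnit results above takes its bdev-layer configuration as JSON on /dev/fd/62; the fragment printed by gen_nvmf_target_json in the log corresponds to a config roughly like the sketch below (written to a temporary file here for clarity; the outer "subsystems"/"bdev" wrapper is an assumption based on the standard SPDK JSON-config layout that the helper adds, and is not printed verbatim in the log).

# hand-written equivalent of the config gen_nvmf_target_json feeds to bdevio
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

Run this way, the suite exercises the same Nvme1n1 bdev (131072 blocks of 512 bytes, i.e. the 64 MiB Malloc0 namespace) reported in the "I/O targets" line above.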
00:17:18.903 real 0m2.831s 00:17:18.903 user 0m9.104s 00:17:18.903 sys 0m0.814s 00:17:18.903 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:18.903 13:35:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 ************************************ 00:17:18.903 END TEST nvmf_bdevio 00:17:18.903 ************************************ 00:17:18.903 13:35:31 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:18.903 13:35:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:18.903 13:35:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:18.903 13:35:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:18.903 ************************************ 00:17:18.903 START TEST nvmf_auth_target 00:17:18.903 ************************************ 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:18.903 * Looking for test storage... 00:17:18.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.903 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:18.904 
Cannot find device "nvmf_tgt_br" 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.904 Cannot find device "nvmf_tgt_br2" 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:18.904 13:35:31 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:19.162 Cannot find device "nvmf_tgt_br" 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:19.162 Cannot find device "nvmf_tgt_br2" 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:19.162 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.162 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:19.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:17:19.419 00:17:19.419 --- 10.0.0.2 ping statistics --- 00:17:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.419 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:19.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:19.419 00:17:19.419 --- 10.0.0.3 ping statistics --- 00:17:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.419 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:19.419 00:17:19.419 --- 10.0.0.1 ping statistics --- 00:17:19.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.419 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82217 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82217 00:17:19.419 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 82217 ']' 00:17:19.420 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.420 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.420 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
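The nvmf_veth_init block above builds the self-contained test network used for the rest of the run: any previous topology is torn down first (hence the harmless "Cannot find device" and "Cannot open network namespace" messages), then the namespace nvmf_tgt_ns_spdk is created to hold the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator side keeps 10.0.0.1, everything is joined over the nvmf_br bridge, TCP port 4420 is opened in iptables, and three pings verify reachability before nvme-tcp is loaded and the target is started. A condensed sketch of the same steps, with the second target veth pair omitted and interface names as in nvmf/common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target address inside the namespace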
00:17:19.420 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.420 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=82247 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1a770c50074985a1948f50f8c906aa27683f372f0d3ec365 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.do7 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1a770c50074985a1948f50f8c906aa27683f372f0d3ec365 0 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1a770c50074985a1948f50f8c906aa27683f372f0d3ec365 0 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1a770c50074985a1948f50f8c906aa27683f372f0d3ec365 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:19.677 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:19.935 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.do7 00:17:19.935 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.do7 00:17:19.935 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.do7 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:19.936 
13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e800beaf1e41784e0ed89c38f57d1524 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.HGO 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e800beaf1e41784e0ed89c38f57d1524 1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e800beaf1e41784e0ed89c38f57d1524 1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e800beaf1e41784e0ed89c38f57d1524 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.HGO 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.HGO 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.HGO 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b1f67cb3f9bb2a873295fe725e43cc5ac024a5f60cb0782e 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.M3I 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b1f67cb3f9bb2a873295fe725e43cc5ac024a5f60cb0782e 2 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b1f67cb3f9bb2a873295fe725e43cc5ac024a5f60cb0782e 2 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b1f67cb3f9bb2a873295fe725e43cc5ac024a5f60cb0782e 00:17:19.936 13:35:32 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.M3I 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.M3I 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.M3I 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a04b00f4f2c840eb40873fd97c741ad773805ec75d298b5135741e7fe83da47 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xW6 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a04b00f4f2c840eb40873fd97c741ad773805ec75d298b5135741e7fe83da47 3 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a04b00f4f2c840eb40873fd97c741ad773805ec75d298b5135741e7fe83da47 3 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a04b00f4f2c840eb40873fd97c741ad773805ec75d298b5135741e7fe83da47 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:19.936 13:35:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xW6 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xW6 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.xW6 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 82217 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 82217 ']' 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
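The four gen_dhchap_key calls above produce the secrets used for the rest of the run: key0 is a plain 48-character key (hash id 0), while key1/key2/key3 are sha256/sha384/sha512-flavoured keys of 32, 48 and 64 characters. Each one is random data from /dev/urandom hex-encoded by xxd, wrapped by the inline "python -" step into a "DHHC-1:<id>:<encoded>:" secret string, stored in a mktemp file and restricted to mode 0600. A rough single-key sketch; the exact wrapping done by the python snippet is not visible in the log, so the plain base64 used here is an assumption (the real helper also appears to append a short checksum before encoding):

  key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex characters
  file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.do7 in the trace
  # Assumed encoding: produce the "DHHC-1:00:<base64 blob>:" secret form seen in the trace
  printf 'DHHC-1:00:%s:\n' "$(printf '%s' "$key" | base64 -w0)" > "$file"
  chmod 0600 "$file"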
00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.936 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.500 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.500 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:20.500 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 82247 /var/tmp/host.sock 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 82247 ']' 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:17:20.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:20.501 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.do7 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.do7 00:17:20.758 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.do7 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HGO 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HGO 00:17:21.014 13:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.HGO 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.M3I 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.M3I 00:17:21.322 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.M3I 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xW6 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xW6 00:17:21.579 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xW6 00:17:21.837 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:21.837 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.837 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:21.837 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.837 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:22.094 13:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:22.352 00:17:22.352 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.352 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.352 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.610 { 00:17:22.610 "cntlid": 1, 00:17:22.610 "qid": 0, 00:17:22.610 "state": "enabled", 00:17:22.610 "listen_address": { 00:17:22.610 "trtype": "TCP", 00:17:22.610 "adrfam": "IPv4", 00:17:22.610 "traddr": "10.0.0.2", 00:17:22.610 "trsvcid": "4420" 00:17:22.610 }, 00:17:22.610 "peer_address": { 00:17:22.610 "trtype": "TCP", 00:17:22.610 "adrfam": "IPv4", 00:17:22.610 "traddr": "10.0.0.1", 00:17:22.610 "trsvcid": "49710" 00:17:22.610 }, 00:17:22.610 "auth": { 00:17:22.610 "state": "completed", 00:17:22.610 "digest": "sha256", 00:17:22.610 "dhgroup": "null" 00:17:22.610 } 00:17:22.610 } 00:17:22.610 ]' 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:22.610 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.868 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.868 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.868 13:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.125 13:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target 
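Everything from here on repeats one provisioning-and-handshake cycle per key, as traced above: the key file is registered under a key name on both RPC servers (rpc_cmd talks to the nvmf target on /var/tmp/spdk.sock, hostrpc to the host-side spdk_tgt on /var/tmp/host.sock), bdev_nvme_set_options pins the host to a single digest/dhgroup pair, the host NQN is added to the subsystem with that key, and bdev_nvme_attach_controller connects using the same key, which can only succeed if the DH-HMAC-CHAP handshake completes. Condensed for key0, with the rpc.py path shortened from the full repo path in the trace:

  rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.do7
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.do7
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0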
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:28.400 13:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:28.400 00:17:28.400 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:28.400 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:28.400 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:28.401 { 00:17:28.401 "cntlid": 3, 00:17:28.401 "qid": 0, 
00:17:28.401 "state": "enabled", 00:17:28.401 "listen_address": { 00:17:28.401 "trtype": "TCP", 00:17:28.401 "adrfam": "IPv4", 00:17:28.401 "traddr": "10.0.0.2", 00:17:28.401 "trsvcid": "4420" 00:17:28.401 }, 00:17:28.401 "peer_address": { 00:17:28.401 "trtype": "TCP", 00:17:28.401 "adrfam": "IPv4", 00:17:28.401 "traddr": "10.0.0.1", 00:17:28.401 "trsvcid": "49736" 00:17:28.401 }, 00:17:28.401 "auth": { 00:17:28.401 "state": "completed", 00:17:28.401 "digest": "sha256", 00:17:28.401 "dhgroup": "null" 00:17:28.401 } 00:17:28.401 } 00:17:28.401 ]' 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.401 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.658 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:28.658 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:28.658 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.658 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.658 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.915 13:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:29.479 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:30.045 13:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:30.303 00:17:30.303 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:30.303 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:30.303 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:30.559 { 00:17:30.559 "cntlid": 5, 00:17:30.559 "qid": 0, 00:17:30.559 "state": "enabled", 00:17:30.559 "listen_address": { 00:17:30.559 "trtype": "TCP", 00:17:30.559 "adrfam": "IPv4", 00:17:30.559 "traddr": "10.0.0.2", 00:17:30.559 "trsvcid": "4420" 00:17:30.559 }, 00:17:30.559 "peer_address": { 00:17:30.559 "trtype": "TCP", 00:17:30.559 "adrfam": "IPv4", 00:17:30.559 "traddr": "10.0.0.1", 00:17:30.559 "trsvcid": "49758" 00:17:30.559 }, 00:17:30.559 "auth": { 00:17:30.559 "state": "completed", 00:17:30.559 "digest": "sha256", 00:17:30.559 "dhgroup": "null" 00:17:30.559 } 00:17:30.559 } 00:17:30.559 ]' 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:30.559 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.560 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.560 13:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.817 13:35:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:31.750 13:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.009 00:17:32.009 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:32.009 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.009 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:32.267 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.267 13:35:45 
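Each cycle also exercises the kernel initiator: once the SPDK-side controller is detached, nvme-cli connects to the same subsystem with the raw DHHC-1 secret passed via --dhchap-secret and then disconnects, and the "disconnected 1 controller(s)" lines above confirm that the in-kernel handshake succeeded as well. The shape of that call as used above, with the secret string elided (it is the content of the matching /tmp/spdk.key-* file):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 \
      --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 \
      --dhchap-secret 'DHHC-1:02:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0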
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.267 13:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.267 13:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.525 13:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.525 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:32.525 { 00:17:32.525 "cntlid": 7, 00:17:32.525 "qid": 0, 00:17:32.525 "state": "enabled", 00:17:32.525 "listen_address": { 00:17:32.525 "trtype": "TCP", 00:17:32.525 "adrfam": "IPv4", 00:17:32.525 "traddr": "10.0.0.2", 00:17:32.525 "trsvcid": "4420" 00:17:32.525 }, 00:17:32.525 "peer_address": { 00:17:32.525 "trtype": "TCP", 00:17:32.525 "adrfam": "IPv4", 00:17:32.525 "traddr": "10.0.0.1", 00:17:32.525 "trsvcid": "49764" 00:17:32.525 }, 00:17:32.525 "auth": { 00:17:32.525 "state": "completed", 00:17:32.525 "digest": "sha256", 00:17:32.525 "dhgroup": "null" 00:17:32.525 } 00:17:32.525 } 00:17:32.525 ]' 00:17:32.525 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.526 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.783 13:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.718 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.976 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:33.977 13:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:34.234 00:17:34.234 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:34.234 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:34.234 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.494 { 00:17:34.494 "cntlid": 9, 00:17:34.494 "qid": 0, 00:17:34.494 "state": "enabled", 00:17:34.494 "listen_address": { 00:17:34.494 "trtype": "TCP", 00:17:34.494 "adrfam": "IPv4", 00:17:34.494 "traddr": "10.0.0.2", 00:17:34.494 "trsvcid": "4420" 00:17:34.494 }, 00:17:34.494 "peer_address": { 00:17:34.494 "trtype": "TCP", 00:17:34.494 "adrfam": "IPv4", 00:17:34.494 "traddr": "10.0.0.1", 00:17:34.494 "trsvcid": "38672" 00:17:34.494 }, 00:17:34.494 "auth": { 00:17:34.494 "state": "completed", 00:17:34.494 "digest": "sha256", 00:17:34.494 "dhgroup": "ffdhe2048" 00:17:34.494 } 00:17:34.494 } 00:17:34.494 ]' 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.494 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.494 13:35:47 
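At this point the dhgroup loop has advanced: bdev_nvme_set_options is re-issued with --dhchap-dhgroups ffdhe2048, and the key0 cycle above already negotiates "dhgroup": "ffdhe2048" in its qpair dump. target/auth.sh drives this whole section as a triple loop over digests, dhgroups and key indices (the for-loops traced earlier), so the remaining ffdhe groups and the sha384/sha512 digests each get the same four-key treatment. In outline, following the loop heads and calls shown in the trace:

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # add_host, attach, verify, teardown
      done
    done
  done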
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.753 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.753 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.753 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.753 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.753 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.010 13:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.601 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:35.860 13:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:36.117 00:17:36.117 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.117 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.117 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.377 { 00:17:36.377 "cntlid": 11, 00:17:36.377 "qid": 0, 00:17:36.377 "state": "enabled", 00:17:36.377 "listen_address": { 00:17:36.377 "trtype": "TCP", 00:17:36.377 "adrfam": "IPv4", 00:17:36.377 "traddr": "10.0.0.2", 00:17:36.377 "trsvcid": "4420" 00:17:36.377 }, 00:17:36.377 "peer_address": { 00:17:36.377 "trtype": "TCP", 00:17:36.377 "adrfam": "IPv4", 00:17:36.377 "traddr": "10.0.0.1", 00:17:36.377 "trsvcid": "38706" 00:17:36.377 }, 00:17:36.377 "auth": { 00:17:36.377 "state": "completed", 00:17:36.377 "digest": "sha256", 00:17:36.377 "dhgroup": "ffdhe2048" 00:17:36.377 } 00:17:36.377 } 00:17:36.377 ]' 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.377 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.637 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.637 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.637 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.895 13:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:37.460 13:35:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.460 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.719 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:37.978 00:17:37.978 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:37.978 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.978 13:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:38.237 { 00:17:38.237 "cntlid": 13, 00:17:38.237 "qid": 0, 00:17:38.237 "state": "enabled", 00:17:38.237 "listen_address": { 00:17:38.237 "trtype": "TCP", 00:17:38.237 "adrfam": "IPv4", 00:17:38.237 "traddr": 
"10.0.0.2", 00:17:38.237 "trsvcid": "4420" 00:17:38.237 }, 00:17:38.237 "peer_address": { 00:17:38.237 "trtype": "TCP", 00:17:38.237 "adrfam": "IPv4", 00:17:38.237 "traddr": "10.0.0.1", 00:17:38.237 "trsvcid": "38726" 00:17:38.237 }, 00:17:38.237 "auth": { 00:17:38.237 "state": "completed", 00:17:38.237 "digest": "sha256", 00:17:38.237 "dhgroup": "ffdhe2048" 00:17:38.237 } 00:17:38.237 } 00:17:38.237 ]' 00:17:38.237 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.496 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.754 13:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 
00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.689 13:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.266 00:17:40.266 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:40.266 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.266 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:40.531 { 00:17:40.531 "cntlid": 15, 00:17:40.531 "qid": 0, 00:17:40.531 "state": "enabled", 00:17:40.531 "listen_address": { 00:17:40.531 "trtype": "TCP", 00:17:40.531 "adrfam": "IPv4", 00:17:40.531 "traddr": "10.0.0.2", 00:17:40.531 "trsvcid": "4420" 00:17:40.531 }, 00:17:40.531 "peer_address": { 00:17:40.531 "trtype": "TCP", 00:17:40.531 "adrfam": "IPv4", 00:17:40.531 "traddr": "10.0.0.1", 00:17:40.531 "trsvcid": "38754" 00:17:40.531 }, 00:17:40.531 "auth": { 00:17:40.531 "state": "completed", 00:17:40.531 "digest": "sha256", 00:17:40.531 "dhgroup": "ffdhe2048" 00:17:40.531 } 00:17:40.531 } 00:17:40.531 ]' 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.531 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.792 13:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.729 13:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:42.296 00:17:42.296 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:42.296 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:42.296 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.554 13:35:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:42.554 { 00:17:42.554 "cntlid": 17, 00:17:42.554 "qid": 0, 00:17:42.554 "state": "enabled", 00:17:42.554 "listen_address": { 00:17:42.554 "trtype": "TCP", 00:17:42.554 "adrfam": "IPv4", 00:17:42.554 "traddr": "10.0.0.2", 00:17:42.554 "trsvcid": "4420" 00:17:42.554 }, 00:17:42.554 "peer_address": { 00:17:42.554 "trtype": "TCP", 00:17:42.554 "adrfam": "IPv4", 00:17:42.554 "traddr": "10.0.0.1", 00:17:42.554 "trsvcid": "38794" 00:17:42.554 }, 00:17:42.554 "auth": { 00:17:42.554 "state": "completed", 00:17:42.554 "digest": "sha256", 00:17:42.554 "dhgroup": "ffdhe3072" 00:17:42.554 } 00:17:42.554 } 00:17:42.554 ]' 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.554 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.812 13:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:43.378 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:43.637 13:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:44.202 00:17:44.202 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:44.202 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.202 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:44.459 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.459 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.459 13:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.459 13:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:44.460 { 00:17:44.460 "cntlid": 19, 00:17:44.460 "qid": 0, 00:17:44.460 "state": "enabled", 00:17:44.460 "listen_address": { 00:17:44.460 "trtype": "TCP", 00:17:44.460 "adrfam": "IPv4", 00:17:44.460 "traddr": "10.0.0.2", 00:17:44.460 "trsvcid": "4420" 00:17:44.460 }, 00:17:44.460 "peer_address": { 00:17:44.460 "trtype": "TCP", 00:17:44.460 "adrfam": "IPv4", 00:17:44.460 "traddr": "10.0.0.1", 00:17:44.460 "trsvcid": "35030" 00:17:44.460 }, 00:17:44.460 "auth": { 00:17:44.460 "state": "completed", 00:17:44.460 "digest": "sha256", 00:17:44.460 "dhgroup": "ffdhe3072" 00:17:44.460 } 00:17:44.460 } 00:17:44.460 ]' 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.460 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.718 13:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:45.284 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.285 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:45.542 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:46.108 00:17:46.108 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:46.108 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:46.109 13:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.109 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.109 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.109 13:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.109 13:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.109 13:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:46.366 { 00:17:46.366 "cntlid": 21, 00:17:46.366 "qid": 0, 00:17:46.366 "state": "enabled", 00:17:46.366 "listen_address": { 00:17:46.366 "trtype": "TCP", 00:17:46.366 "adrfam": "IPv4", 00:17:46.366 "traddr": "10.0.0.2", 00:17:46.366 "trsvcid": "4420" 00:17:46.366 }, 00:17:46.366 "peer_address": { 00:17:46.366 "trtype": "TCP", 00:17:46.366 "adrfam": "IPv4", 00:17:46.366 "traddr": "10.0.0.1", 00:17:46.366 "trsvcid": "35064" 00:17:46.366 }, 00:17:46.366 "auth": { 00:17:46.366 "state": "completed", 00:17:46.366 "digest": "sha256", 00:17:46.366 "dhgroup": "ffdhe3072" 00:17:46.366 } 00:17:46.366 } 00:17:46.366 ]' 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.366 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.623 13:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.189 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.447 13:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.448 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.448 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.704 00:17:47.962 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:47.962 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.962 13:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:48.220 { 00:17:48.220 "cntlid": 23, 00:17:48.220 "qid": 0, 00:17:48.220 "state": "enabled", 00:17:48.220 "listen_address": { 00:17:48.220 "trtype": "TCP", 00:17:48.220 "adrfam": "IPv4", 00:17:48.220 "traddr": "10.0.0.2", 00:17:48.220 "trsvcid": 
"4420" 00:17:48.220 }, 00:17:48.220 "peer_address": { 00:17:48.220 "trtype": "TCP", 00:17:48.220 "adrfam": "IPv4", 00:17:48.220 "traddr": "10.0.0.1", 00:17:48.220 "trsvcid": "35090" 00:17:48.220 }, 00:17:48.220 "auth": { 00:17:48.220 "state": "completed", 00:17:48.220 "digest": "sha256", 00:17:48.220 "dhgroup": "ffdhe3072" 00:17:48.220 } 00:17:48.220 } 00:17:48.220 ]' 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.220 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.477 13:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:49.410 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:49.668 13:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:50.234 00:17:50.235 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:50.235 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.235 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:50.491 { 00:17:50.491 "cntlid": 25, 00:17:50.491 "qid": 0, 00:17:50.491 "state": "enabled", 00:17:50.491 "listen_address": { 00:17:50.491 "trtype": "TCP", 00:17:50.491 "adrfam": "IPv4", 00:17:50.491 "traddr": "10.0.0.2", 00:17:50.491 "trsvcid": "4420" 00:17:50.491 }, 00:17:50.491 "peer_address": { 00:17:50.491 "trtype": "TCP", 00:17:50.491 "adrfam": "IPv4", 00:17:50.491 "traddr": "10.0.0.1", 00:17:50.491 "trsvcid": "35128" 00:17:50.491 }, 00:17:50.491 "auth": { 00:17:50.491 "state": "completed", 00:17:50.491 "digest": "sha256", 00:17:50.491 "dhgroup": "ffdhe4096" 00:17:50.491 } 00:17:50.491 } 00:17:50.491 ]' 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.491 13:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.056 13:36:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:51.621 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:51.880 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:17:51.880 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:51.880 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:51.881 13:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:52.447 00:17:52.447 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:52.447 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:52.447 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.705 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:52.705 { 00:17:52.705 "cntlid": 27, 00:17:52.705 "qid": 0, 00:17:52.705 "state": "enabled", 00:17:52.705 "listen_address": { 00:17:52.705 "trtype": "TCP", 00:17:52.705 "adrfam": "IPv4", 00:17:52.705 "traddr": "10.0.0.2", 00:17:52.705 "trsvcid": "4420" 00:17:52.706 }, 00:17:52.706 "peer_address": { 00:17:52.706 "trtype": "TCP", 00:17:52.706 "adrfam": "IPv4", 00:17:52.706 "traddr": "10.0.0.1", 00:17:52.706 "trsvcid": "49304" 00:17:52.706 }, 00:17:52.706 "auth": { 00:17:52.706 "state": "completed", 00:17:52.706 "digest": "sha256", 00:17:52.706 "dhgroup": "ffdhe4096" 00:17:52.706 } 00:17:52.706 } 00:17:52.706 ]' 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.706 13:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.965 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:53.900 13:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:54.157 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:54.722 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.722 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:54.979 { 00:17:54.979 "cntlid": 29, 00:17:54.979 "qid": 0, 00:17:54.979 "state": "enabled", 00:17:54.979 "listen_address": { 00:17:54.979 "trtype": "TCP", 00:17:54.979 "adrfam": "IPv4", 00:17:54.979 "traddr": "10.0.0.2", 00:17:54.979 "trsvcid": "4420" 00:17:54.979 }, 00:17:54.979 "peer_address": { 00:17:54.979 "trtype": "TCP", 00:17:54.979 "adrfam": "IPv4", 00:17:54.979 "traddr": "10.0.0.1", 00:17:54.979 "trsvcid": "49324" 00:17:54.979 }, 00:17:54.979 "auth": { 00:17:54.979 "state": "completed", 00:17:54.979 "digest": "sha256", 00:17:54.979 "dhgroup": "ffdhe4096" 00:17:54.979 } 00:17:54.979 } 00:17:54.979 ]' 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:54.979 
13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.979 13:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.237 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.807 13:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.081 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.648 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.648 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:56.909 { 00:17:56.909 "cntlid": 31, 00:17:56.909 "qid": 0, 00:17:56.909 "state": "enabled", 00:17:56.909 "listen_address": { 00:17:56.909 "trtype": "TCP", 00:17:56.909 "adrfam": "IPv4", 00:17:56.909 "traddr": "10.0.0.2", 00:17:56.909 "trsvcid": "4420" 00:17:56.909 }, 00:17:56.909 "peer_address": { 00:17:56.909 "trtype": "TCP", 00:17:56.909 "adrfam": "IPv4", 00:17:56.909 "traddr": "10.0.0.1", 00:17:56.909 "trsvcid": "49356" 00:17:56.909 }, 00:17:56.909 "auth": { 00:17:56.909 "state": "completed", 00:17:56.909 "digest": "sha256", 00:17:56.909 "dhgroup": "ffdhe4096" 00:17:56.909 } 00:17:56.909 } 00:17:56.909 ]' 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.909 13:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.166 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.733 13:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:58.300 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:58.563 00:17:58.563 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:58.563 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.563 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:58.821 { 00:17:58.821 "cntlid": 33, 00:17:58.821 "qid": 0, 00:17:58.821 "state": "enabled", 00:17:58.821 "listen_address": { 00:17:58.821 
"trtype": "TCP", 00:17:58.821 "adrfam": "IPv4", 00:17:58.821 "traddr": "10.0.0.2", 00:17:58.821 "trsvcid": "4420" 00:17:58.821 }, 00:17:58.821 "peer_address": { 00:17:58.821 "trtype": "TCP", 00:17:58.821 "adrfam": "IPv4", 00:17:58.821 "traddr": "10.0.0.1", 00:17:58.821 "trsvcid": "49382" 00:17:58.821 }, 00:17:58.821 "auth": { 00:17:58.821 "state": "completed", 00:17:58.821 "digest": "sha256", 00:17:58.821 "dhgroup": "ffdhe6144" 00:17:58.821 } 00:17:58.821 } 00:17:58.821 ]' 00:17:58.821 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.080 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.080 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.080 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:59.080 13:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:59.080 13:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.080 13:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.080 13:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.338 13:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:00.273 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:00.841 00:18:00.841 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:00.841 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:00.841 13:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:01.116 { 00:18:01.116 "cntlid": 35, 00:18:01.116 "qid": 0, 00:18:01.116 "state": "enabled", 00:18:01.116 "listen_address": { 00:18:01.116 "trtype": "TCP", 00:18:01.116 "adrfam": "IPv4", 00:18:01.116 "traddr": "10.0.0.2", 00:18:01.116 "trsvcid": "4420" 00:18:01.116 }, 00:18:01.116 "peer_address": { 00:18:01.116 "trtype": "TCP", 00:18:01.116 "adrfam": "IPv4", 00:18:01.116 "traddr": "10.0.0.1", 00:18:01.116 "trsvcid": "49422" 00:18:01.116 }, 00:18:01.116 "auth": { 00:18:01.116 "state": "completed", 00:18:01.116 "digest": "sha256", 00:18:01.116 "dhgroup": "ffdhe6144" 00:18:01.116 } 00:18:01.116 } 00:18:01.116 ]' 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.116 13:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.682 13:36:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:02.248 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.249 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.507 13:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.072 00:18:03.072 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:03.072 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.072 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
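The trace entries around this point make up one pass of the connect_authenticate helper in target/auth.sh: the SPDK host is pinned to a single digest/DH-group pair, the host NQN is registered on the target with one of the pre-loaded DH-CHAP keys, a bdev_nvme_attach_controller call performs the DH-HMAC-CHAP handshake, the qpair's auth state is read back over RPC, and the same secret is then exercised through the kernel initiator with nvme-cli. A condensed sketch of that round, reconstructed from the RPC calls visible in this log, follows; addresses, NQNs and socket paths are copied from the trace, the DHHC-1 secret is a placeholder, and the target-side rpc.py calls are assumed to use the default RPC socket from the SPDK repository root.

# One connect_authenticate round (sketch reconstructed from the trace above).
digest=sha256 dhgroup=ffdhe6144 key=key2
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the SPDK initiator to the digest/DH group under test.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target side: allow the host NQN and bind it to a DH-CHAP key.
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# Attach from the SPDK host; this is where DH-HMAC-CHAP actually runs.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# Confirm the controller exists and inspect the qpair's auth result.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Detach, then repeat the connection through the kernel initiator.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" --dhchap-secret "DHHC-1:02:<base64 key material>:"
nvme disconnect -n "$subnqn"
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Running the attach both through the SPDK host stack and through nvme-cli exercises the userspace and the kernel DH-HMAC-CHAP paths against the same target-side key, which is why every round in this log ends with an nvme connect/disconnect pair.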
00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.330 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:03.330 { 00:18:03.330 "cntlid": 37, 00:18:03.330 "qid": 0, 00:18:03.330 "state": "enabled", 00:18:03.330 "listen_address": { 00:18:03.330 "trtype": "TCP", 00:18:03.330 "adrfam": "IPv4", 00:18:03.330 "traddr": "10.0.0.2", 00:18:03.330 "trsvcid": "4420" 00:18:03.330 }, 00:18:03.330 "peer_address": { 00:18:03.330 "trtype": "TCP", 00:18:03.330 "adrfam": "IPv4", 00:18:03.330 "traddr": "10.0.0.1", 00:18:03.331 "trsvcid": "34230" 00:18:03.331 }, 00:18:03.331 "auth": { 00:18:03.331 "state": "completed", 00:18:03.331 "digest": "sha256", 00:18:03.331 "dhgroup": "ffdhe6144" 00:18:03.331 } 00:18:03.331 } 00:18:03.331 ]' 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.331 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.942 13:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:04.508 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:04.509 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.767 13:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.334 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:05.334 { 00:18:05.334 "cntlid": 39, 00:18:05.334 "qid": 0, 00:18:05.334 "state": "enabled", 00:18:05.334 "listen_address": { 00:18:05.334 "trtype": "TCP", 00:18:05.334 "adrfam": "IPv4", 00:18:05.334 "traddr": "10.0.0.2", 00:18:05.334 "trsvcid": "4420" 00:18:05.334 }, 00:18:05.334 "peer_address": { 00:18:05.334 "trtype": "TCP", 00:18:05.334 "adrfam": "IPv4", 00:18:05.334 "traddr": "10.0.0.1", 00:18:05.334 "trsvcid": "34262" 00:18:05.334 }, 00:18:05.334 "auth": { 00:18:05.334 "state": "completed", 00:18:05.334 "digest": "sha256", 00:18:05.334 "dhgroup": "ffdhe6144" 00:18:05.334 } 00:18:05.334 } 00:18:05.334 ]' 00:18:05.334 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:05.591 
13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.591 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.849 13:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.414 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.672 13:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:06.672 13:36:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:07.608 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:07.608 { 00:18:07.608 "cntlid": 41, 00:18:07.608 "qid": 0, 00:18:07.608 "state": "enabled", 00:18:07.608 "listen_address": { 00:18:07.608 "trtype": "TCP", 00:18:07.608 "adrfam": "IPv4", 00:18:07.608 "traddr": "10.0.0.2", 00:18:07.608 "trsvcid": "4420" 00:18:07.608 }, 00:18:07.608 "peer_address": { 00:18:07.608 "trtype": "TCP", 00:18:07.608 "adrfam": "IPv4", 00:18:07.608 "traddr": "10.0.0.1", 00:18:07.608 "trsvcid": "34292" 00:18:07.608 }, 00:18:07.608 "auth": { 00:18:07.608 "state": "completed", 00:18:07.608 "digest": "sha256", 00:18:07.608 "dhgroup": "ffdhe8192" 00:18:07.608 } 00:18:07.608 } 00:18:07.608 ]' 00:18:07.608 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.870 13:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.128 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.697 13:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:08.956 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:08.957 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:09.983 00:18:09.983 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:09.983 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:09.983 13:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:10.246 { 00:18:10.246 "cntlid": 43, 00:18:10.246 "qid": 0, 00:18:10.246 "state": "enabled", 00:18:10.246 "listen_address": { 
00:18:10.246 "trtype": "TCP", 00:18:10.246 "adrfam": "IPv4", 00:18:10.246 "traddr": "10.0.0.2", 00:18:10.246 "trsvcid": "4420" 00:18:10.246 }, 00:18:10.246 "peer_address": { 00:18:10.246 "trtype": "TCP", 00:18:10.246 "adrfam": "IPv4", 00:18:10.246 "traddr": "10.0.0.1", 00:18:10.246 "trsvcid": "34318" 00:18:10.246 }, 00:18:10.246 "auth": { 00:18:10.246 "state": "completed", 00:18:10.246 "digest": "sha256", 00:18:10.246 "dhgroup": "ffdhe8192" 00:18:10.246 } 00:18:10.246 } 00:18:10.246 ]' 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.246 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.504 13:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.442 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:11.700 13:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:12.265 00:18:12.265 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.265 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.265 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:12.523 { 00:18:12.523 "cntlid": 45, 00:18:12.523 "qid": 0, 00:18:12.523 "state": "enabled", 00:18:12.523 "listen_address": { 00:18:12.523 "trtype": "TCP", 00:18:12.523 "adrfam": "IPv4", 00:18:12.523 "traddr": "10.0.0.2", 00:18:12.523 "trsvcid": "4420" 00:18:12.523 }, 00:18:12.523 "peer_address": { 00:18:12.523 "trtype": "TCP", 00:18:12.523 "adrfam": "IPv4", 00:18:12.523 "traddr": "10.0.0.1", 00:18:12.523 "trsvcid": "34344" 00:18:12.523 }, 00:18:12.523 "auth": { 00:18:12.523 "state": "completed", 00:18:12.523 "digest": "sha256", 00:18:12.523 "dhgroup": "ffdhe8192" 00:18:12.523 } 00:18:12.523 } 00:18:12.523 ]' 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.523 13:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.136 13:36:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.704 13:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.639 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:14.639 { 00:18:14.639 "cntlid": 47, 00:18:14.639 "qid": 0, 00:18:14.639 "state": "enabled", 00:18:14.639 "listen_address": { 00:18:14.639 "trtype": "TCP", 00:18:14.639 "adrfam": "IPv4", 00:18:14.639 "traddr": "10.0.0.2", 00:18:14.639 "trsvcid": "4420" 00:18:14.639 }, 00:18:14.639 "peer_address": { 00:18:14.639 "trtype": "TCP", 00:18:14.639 "adrfam": "IPv4", 00:18:14.639 "traddr": "10.0.0.1", 00:18:14.639 "trsvcid": "58086" 00:18:14.639 }, 00:18:14.639 "auth": { 00:18:14.639 "state": "completed", 00:18:14.639 "digest": "sha256", 00:18:14.639 "dhgroup": "ffdhe8192" 00:18:14.639 } 00:18:14.639 } 00:18:14.639 ]' 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.639 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:14.897 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.897 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.897 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.897 13:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:18:15.829 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:16.086 13:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:16.426 00:18:16.426 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:16.426 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.426 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:16.686 { 00:18:16.686 "cntlid": 49, 00:18:16.686 "qid": 0, 00:18:16.686 "state": "enabled", 00:18:16.686 "listen_address": { 00:18:16.686 "trtype": "TCP", 00:18:16.686 "adrfam": "IPv4", 00:18:16.686 "traddr": "10.0.0.2", 00:18:16.686 "trsvcid": "4420" 00:18:16.686 }, 00:18:16.686 "peer_address": { 00:18:16.686 "trtype": "TCP", 00:18:16.686 "adrfam": "IPv4", 00:18:16.686 "traddr": "10.0.0.1", 00:18:16.686 "trsvcid": "58114" 00:18:16.686 }, 00:18:16.686 "auth": { 00:18:16.686 "state": "completed", 00:18:16.686 "digest": "sha384", 00:18:16.686 "dhgroup": "null" 00:18:16.686 } 00:18:16.686 } 00:18:16.686 ]' 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
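From this entry on, the loop repeats with sha384 as the digest and, in this block, the null DH group, i.e. the DH-HMAC-CHAP exchange completes without an FFDHE key agreement. The only per-iteration assertion that changes is the expected digest/dhgroup/state triple read back from nvmf_subsystem_get_qpairs. A stand-alone version of that check, mirroring the jq filters used throughout this trace, might look like the sketch below; the verify_qpair_auth name is illustrative (auth.sh performs these checks inline), and the rpc.py call assumes the default target socket and the SPDK repository root as the working directory.

# Hypothetical helper mirroring the jq checks in this trace: read the first
# qpair of the subsystem and compare its negotiated auth parameters.
verify_qpair_auth() {
    local subnqn=$1 digest=$2 dhgroup=$3 auth
    auth=$(scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn" | jq -c '.[0].auth')
    [[ $(jq -r '.digest' <<<"$auth") == "$digest" ]] || return 1
    [[ $(jq -r '.dhgroup' <<<"$auth") == "$dhgroup" ]] || return 1
    [[ $(jq -r '.state' <<<"$auth") == "completed" ]]
}

# Matches the state reported by the surrounding entries.
verify_qpair_auth nqn.2024-03.io.spdk:cnode0 sha384 null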
00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.686 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.944 13:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:17.876 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:18:18.133 13:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:18.390 00:18:18.390 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:18.390 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:18.390 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:18.649 { 00:18:18.649 "cntlid": 51, 00:18:18.649 "qid": 0, 00:18:18.649 "state": "enabled", 00:18:18.649 "listen_address": { 00:18:18.649 "trtype": "TCP", 00:18:18.649 "adrfam": "IPv4", 00:18:18.649 "traddr": "10.0.0.2", 00:18:18.649 "trsvcid": "4420" 00:18:18.649 }, 00:18:18.649 "peer_address": { 00:18:18.649 "trtype": "TCP", 00:18:18.649 "adrfam": "IPv4", 00:18:18.649 "traddr": "10.0.0.1", 00:18:18.649 "trsvcid": "58140" 00:18:18.649 }, 00:18:18.649 "auth": { 00:18:18.649 "state": "completed", 00:18:18.649 "digest": "sha384", 00:18:18.649 "dhgroup": "null" 00:18:18.649 } 00:18:18.649 } 00:18:18.649 ]' 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.649 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.906 13:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.472 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.729 13:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.730 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:19.730 13:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:20.295 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:20.295 { 00:18:20.295 "cntlid": 53, 00:18:20.295 "qid": 0, 00:18:20.295 "state": "enabled", 00:18:20.295 "listen_address": { 00:18:20.295 
"trtype": "TCP", 00:18:20.295 "adrfam": "IPv4", 00:18:20.295 "traddr": "10.0.0.2", 00:18:20.295 "trsvcid": "4420" 00:18:20.295 }, 00:18:20.295 "peer_address": { 00:18:20.295 "trtype": "TCP", 00:18:20.295 "adrfam": "IPv4", 00:18:20.295 "traddr": "10.0.0.1", 00:18:20.295 "trsvcid": "58166" 00:18:20.295 }, 00:18:20.295 "auth": { 00:18:20.295 "state": "completed", 00:18:20.295 "digest": "sha384", 00:18:20.295 "dhgroup": "null" 00:18:20.295 } 00:18:20.295 } 00:18:20.295 ]' 00:18:20.295 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.553 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.811 13:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.376 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.634 13:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.212 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.212 13:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:22.470 { 00:18:22.470 "cntlid": 55, 00:18:22.470 "qid": 0, 00:18:22.470 "state": "enabled", 00:18:22.470 "listen_address": { 00:18:22.470 "trtype": "TCP", 00:18:22.470 "adrfam": "IPv4", 00:18:22.470 "traddr": "10.0.0.2", 00:18:22.470 "trsvcid": "4420" 00:18:22.470 }, 00:18:22.470 "peer_address": { 00:18:22.470 "trtype": "TCP", 00:18:22.470 "adrfam": "IPv4", 00:18:22.470 "traddr": "10.0.0.1", 00:18:22.470 "trsvcid": "58200" 00:18:22.470 }, 00:18:22.470 "auth": { 00:18:22.470 "state": "completed", 00:18:22.470 "digest": "sha384", 00:18:22.470 "dhgroup": "null" 00:18:22.470 } 00:18:22.470 } 00:18:22.470 ]' 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.470 13:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.728 13:36:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:23.661 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.661 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:23.661 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:23.662 13:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:23.982 00:18:23.982 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:23.982 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:23.982 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:24.251 { 00:18:24.251 "cntlid": 57, 00:18:24.251 "qid": 0, 00:18:24.251 "state": "enabled", 00:18:24.251 "listen_address": { 00:18:24.251 "trtype": "TCP", 00:18:24.251 "adrfam": "IPv4", 00:18:24.251 "traddr": "10.0.0.2", 00:18:24.251 "trsvcid": "4420" 00:18:24.251 }, 00:18:24.251 "peer_address": { 00:18:24.251 "trtype": "TCP", 00:18:24.251 "adrfam": "IPv4", 00:18:24.251 "traddr": "10.0.0.1", 00:18:24.251 "trsvcid": "40116" 00:18:24.251 }, 00:18:24.251 "auth": { 00:18:24.251 "state": "completed", 00:18:24.251 "digest": "sha384", 00:18:24.251 "dhgroup": "ffdhe2048" 00:18:24.251 } 00:18:24.251 } 00:18:24.251 ]' 00:18:24.251 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.509 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.767 13:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:25.701 13:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:25.959 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.217 13:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.474 13:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:26.475 { 00:18:26.475 "cntlid": 59, 00:18:26.475 "qid": 0, 00:18:26.475 "state": "enabled", 00:18:26.475 "listen_address": { 00:18:26.475 "trtype": "TCP", 00:18:26.475 "adrfam": "IPv4", 00:18:26.475 "traddr": "10.0.0.2", 00:18:26.475 "trsvcid": "4420" 00:18:26.475 }, 00:18:26.475 "peer_address": { 00:18:26.475 "trtype": "TCP", 00:18:26.475 "adrfam": "IPv4", 00:18:26.475 "traddr": "10.0.0.1", 00:18:26.475 "trsvcid": "40154" 00:18:26.475 }, 00:18:26.475 "auth": { 00:18:26.475 "state": "completed", 00:18:26.475 "digest": "sha384", 00:18:26.475 "dhgroup": "ffdhe2048" 00:18:26.475 } 00:18:26.475 } 00:18:26.475 ]' 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
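The digest comparison just above and the dhgroup/state comparisons that follow all parse the same nvmf_subsystem_get_qpairs output. A minimal sketch of that verification step, assembled only from commands already visible in this log; the EXPECTED_* variables and the plain scripts/rpc.py invocation standing in for the log's rpc_cmd wrapper are illustrative assumptions, not the test's own helpers.

# Sketch: confirm the negotiated DH-HMAC-CHAP parameters on a connected qpair.
# EXPECTED_* values are examples; the log iterates sha384 over several dhgroups.
EXPECTED_DIGEST=sha384
EXPECTED_DHGROUP=ffdhe2048

# Target-side RPC (the log's rpc_cmd); assumes the default RPC socket.
qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The first qpair carries the auth descriptor once authentication has finished.
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state' <<< "$qpairs")

[[ $digest == "$EXPECTED_DIGEST" ]]
[[ $dhgroup == "$EXPECTED_DHGROUP" ]]
[[ $state == completed ]]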
00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.475 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.732 13:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.668 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.926 13:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.184 00:18:28.442 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.442 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.442 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:28.700 { 00:18:28.700 "cntlid": 61, 00:18:28.700 "qid": 0, 00:18:28.700 "state": "enabled", 00:18:28.700 "listen_address": { 00:18:28.700 "trtype": "TCP", 00:18:28.700 "adrfam": "IPv4", 00:18:28.700 "traddr": "10.0.0.2", 00:18:28.700 "trsvcid": "4420" 00:18:28.700 }, 00:18:28.700 "peer_address": { 00:18:28.700 "trtype": "TCP", 00:18:28.700 "adrfam": "IPv4", 00:18:28.700 "traddr": "10.0.0.1", 00:18:28.700 "trsvcid": "40174" 00:18:28.700 }, 00:18:28.700 "auth": { 00:18:28.700 "state": "completed", 00:18:28.700 "digest": "sha384", 00:18:28.700 "dhgroup": "ffdhe2048" 00:18:28.700 } 00:18:28.700 } 00:18:28.700 ]' 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.700 13:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.957 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:29.892 
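That closes one full pass for sha384/ffdhe2048 with key2: restrict the host-side options, register the key on the subsystem, attach and verify, repeat the handshake with nvme-cli, then remove the host again. A condensed sketch of one such pass, using only commands and arguments that appear in this log; the shell variables and the bare scripts/rpc.py call standing in for the log's rpc_cmd wrapper are illustrative assumptions, not the real target/auth.sh.

# One digest/dhgroup/key iteration, as exercised above (sketch only).
digest=sha384 dhgroup=ffdhe2048 key=key2
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
subnqn=nqn.2024-03.io.spdk:cnode0
hostrpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock'

# Limit the SPDK host (initiator) side to the digest/dhgroup under test.
$hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host on the subsystem with the key being tested (target-side RPC).
scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# Attach an SPDK controller through the host RPC socket, check it, then detach.
$hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
$hostrpc bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
scripts/rpc.py nvmf_subsystem_get_qpairs "$subnqn"          # auth.state should be "completed"
$hostrpc bdev_nvme_detach_controller nvme0

# The same key is also exercised via nvme-cli (see the nvme connect entries above)
# before the host entry is removed ahead of the next key.
scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The detach and remove_host at the end mirror what the log shows between iterations: each key is torn back down so the next pass starts from a subsystem with no host registered.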
13:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.892 13:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.149 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.715 00:18:30.715 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:30.715 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.715 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:30.972 { 00:18:30.972 "cntlid": 63, 00:18:30.972 "qid": 0, 00:18:30.972 "state": "enabled", 00:18:30.972 "listen_address": { 00:18:30.972 "trtype": "TCP", 00:18:30.972 "adrfam": "IPv4", 00:18:30.972 "traddr": 
"10.0.0.2", 00:18:30.972 "trsvcid": "4420" 00:18:30.972 }, 00:18:30.972 "peer_address": { 00:18:30.972 "trtype": "TCP", 00:18:30.972 "adrfam": "IPv4", 00:18:30.972 "traddr": "10.0.0.1", 00:18:30.972 "trsvcid": "40212" 00:18:30.972 }, 00:18:30.972 "auth": { 00:18:30.972 "state": "completed", 00:18:30.972 "digest": "sha384", 00:18:30.972 "dhgroup": "ffdhe2048" 00:18:30.972 } 00:18:30.972 } 00:18:30.972 ]' 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.972 13:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.230 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.165 13:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.423 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.424 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:32.681 00:18:32.681 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.681 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.681 13:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:32.940 { 00:18:32.940 "cntlid": 65, 00:18:32.940 "qid": 0, 00:18:32.940 "state": "enabled", 00:18:32.940 "listen_address": { 00:18:32.940 "trtype": "TCP", 00:18:32.940 "adrfam": "IPv4", 00:18:32.940 "traddr": "10.0.0.2", 00:18:32.940 "trsvcid": "4420" 00:18:32.940 }, 00:18:32.940 "peer_address": { 00:18:32.940 "trtype": "TCP", 00:18:32.940 "adrfam": "IPv4", 00:18:32.940 "traddr": "10.0.0.1", 00:18:32.940 "trsvcid": "37644" 00:18:32.940 }, 00:18:32.940 "auth": { 00:18:32.940 "state": "completed", 00:18:32.940 "digest": "sha384", 00:18:32.940 "dhgroup": "ffdhe3072" 00:18:32.940 } 00:18:32.940 } 00:18:32.940 ]' 00:18:32.940 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.197 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:18:33.455 13:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.387 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.643 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:18:34.643 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:34.644 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:34.900 00:18:34.900 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:34.900 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:34.900 13:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:35.527 { 00:18:35.527 "cntlid": 67, 00:18:35.527 "qid": 0, 00:18:35.527 "state": "enabled", 00:18:35.527 "listen_address": { 00:18:35.527 "trtype": "TCP", 00:18:35.527 "adrfam": "IPv4", 00:18:35.527 "traddr": "10.0.0.2", 00:18:35.527 "trsvcid": "4420" 00:18:35.527 }, 00:18:35.527 "peer_address": { 00:18:35.527 "trtype": "TCP", 00:18:35.527 "adrfam": "IPv4", 00:18:35.527 "traddr": "10.0.0.1", 00:18:35.527 "trsvcid": "37680" 00:18:35.527 }, 00:18:35.527 "auth": { 00:18:35.527 "state": "completed", 00:18:35.527 "digest": "sha384", 00:18:35.527 "dhgroup": "ffdhe3072" 00:18:35.527 } 00:18:35.527 } 00:18:35.527 ]' 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.527 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.785 13:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.716 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:36.974 13:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:37.539 00:18:37.539 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.539 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.539 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.795 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.795 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.795 13:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.795 13:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:37.796 { 00:18:37.796 "cntlid": 69, 00:18:37.796 "qid": 0, 00:18:37.796 "state": "enabled", 00:18:37.796 "listen_address": { 00:18:37.796 "trtype": "TCP", 00:18:37.796 "adrfam": "IPv4", 00:18:37.796 "traddr": "10.0.0.2", 00:18:37.796 "trsvcid": "4420" 00:18:37.796 }, 00:18:37.796 "peer_address": { 00:18:37.796 "trtype": "TCP", 00:18:37.796 "adrfam": "IPv4", 00:18:37.796 "traddr": "10.0.0.1", 00:18:37.796 "trsvcid": "37714" 00:18:37.796 }, 00:18:37.796 "auth": { 00:18:37.796 "state": "completed", 00:18:37.796 "digest": "sha384", 00:18:37.796 "dhgroup": "ffdhe3072" 00:18:37.796 } 00:18:37.796 } 00:18:37.796 ]' 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.796 13:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.053 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.036 13:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.036 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.602 00:18:39.602 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:39.602 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:39.602 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:39.862 { 00:18:39.862 "cntlid": 71, 00:18:39.862 "qid": 0, 00:18:39.862 "state": "enabled", 00:18:39.862 "listen_address": { 00:18:39.862 "trtype": "TCP", 00:18:39.862 "adrfam": "IPv4", 00:18:39.862 "traddr": "10.0.0.2", 00:18:39.862 "trsvcid": "4420" 00:18:39.862 }, 00:18:39.862 "peer_address": { 00:18:39.862 "trtype": "TCP", 00:18:39.862 "adrfam": "IPv4", 00:18:39.862 "traddr": "10.0.0.1", 00:18:39.862 "trsvcid": "37752" 00:18:39.862 }, 00:18:39.862 "auth": { 00:18:39.862 "state": "completed", 00:18:39.862 "digest": "sha384", 00:18:39.862 "dhgroup": "ffdhe3072" 00:18:39.862 } 00:18:39.862 } 00:18:39.862 ]' 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.862 13:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.120 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:40.685 13:36:53 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.685 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:40.943 13:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:41.508 00:18:41.508 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.508 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:41.508 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.765 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.765 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.765 13:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.765 13:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.765 13:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.766 { 00:18:41.766 "cntlid": 73, 00:18:41.766 "qid": 0, 00:18:41.766 "state": "enabled", 00:18:41.766 
"listen_address": { 00:18:41.766 "trtype": "TCP", 00:18:41.766 "adrfam": "IPv4", 00:18:41.766 "traddr": "10.0.0.2", 00:18:41.766 "trsvcid": "4420" 00:18:41.766 }, 00:18:41.766 "peer_address": { 00:18:41.766 "trtype": "TCP", 00:18:41.766 "adrfam": "IPv4", 00:18:41.766 "traddr": "10.0.0.1", 00:18:41.766 "trsvcid": "37774" 00:18:41.766 }, 00:18:41.766 "auth": { 00:18:41.766 "state": "completed", 00:18:41.766 "digest": "sha384", 00:18:41.766 "dhgroup": "ffdhe4096" 00:18:41.766 } 00:18:41.766 } 00:18:41.766 ]' 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.766 13:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.023 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.955 13:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:43.213 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:43.776 00:18:43.776 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:43.776 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.776 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.034 { 00:18:44.034 "cntlid": 75, 00:18:44.034 "qid": 0, 00:18:44.034 "state": "enabled", 00:18:44.034 "listen_address": { 00:18:44.034 "trtype": "TCP", 00:18:44.034 "adrfam": "IPv4", 00:18:44.034 "traddr": "10.0.0.2", 00:18:44.034 "trsvcid": "4420" 00:18:44.034 }, 00:18:44.034 "peer_address": { 00:18:44.034 "trtype": "TCP", 00:18:44.034 "adrfam": "IPv4", 00:18:44.034 "traddr": "10.0.0.1", 00:18:44.034 "trsvcid": "37622" 00:18:44.034 }, 00:18:44.034 "auth": { 00:18:44.034 "state": "completed", 00:18:44.034 "digest": "sha384", 00:18:44.034 "dhgroup": "ffdhe4096" 00:18:44.034 } 00:18:44.034 } 00:18:44.034 ]' 00:18:44.034 13:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.034 13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.599 
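The nvme connect entry that follows repeats the sha384/ffdhe4096 handshake from the kernel initiator using the key1 secret. In isolation that step looks like the sketch below; the NQNs, host UUID and DHHC-1 secret are the ones used throughout this log, while the variable names and the explicit connect/disconnect pairing are illustrative.

# Sketch: kernel-initiator (nvme-cli) side of the same DH-HMAC-CHAP exchange.
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4

# The DHHC-1:01:... secret is the key1 secret configured on the target above.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX:'

# Tear the kernel controller down again once the connect returns.
nvme disconnect -n "$subnqn"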
13:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:45.163 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.163 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:45.163 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.164 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.164 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.164 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.164 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.164 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:45.421 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:45.986 00:18:45.986 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:45.986 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:45.986 13:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:46.245 { 00:18:46.245 "cntlid": 77, 00:18:46.245 "qid": 0, 00:18:46.245 "state": "enabled", 00:18:46.245 "listen_address": { 00:18:46.245 "trtype": "TCP", 00:18:46.245 "adrfam": "IPv4", 00:18:46.245 "traddr": "10.0.0.2", 00:18:46.245 "trsvcid": "4420" 00:18:46.245 }, 00:18:46.245 "peer_address": { 00:18:46.245 "trtype": "TCP", 00:18:46.245 "adrfam": "IPv4", 00:18:46.245 "traddr": "10.0.0.1", 00:18:46.245 "trsvcid": "37648" 00:18:46.245 }, 00:18:46.245 "auth": { 00:18:46.245 "state": "completed", 00:18:46.245 "digest": "sha384", 00:18:46.245 "dhgroup": "ffdhe4096" 00:18:46.245 } 00:18:46.245 } 00:18:46.245 ]' 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:46.245 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:46.503 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.503 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.503 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.761 13:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:47.328 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.585 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.842 13:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.842 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.842 13:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.100 00:18:48.100 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:48.100 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.100 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:48.357 { 00:18:48.357 "cntlid": 79, 00:18:48.357 "qid": 0, 00:18:48.357 "state": "enabled", 00:18:48.357 "listen_address": { 00:18:48.357 "trtype": "TCP", 00:18:48.357 "adrfam": "IPv4", 00:18:48.357 "traddr": "10.0.0.2", 00:18:48.357 "trsvcid": "4420" 00:18:48.357 }, 00:18:48.357 "peer_address": { 00:18:48.357 "trtype": "TCP", 00:18:48.357 "adrfam": "IPv4", 00:18:48.357 "traddr": "10.0.0.1", 00:18:48.357 "trsvcid": "37682" 00:18:48.357 }, 00:18:48.357 "auth": { 00:18:48.357 "state": "completed", 00:18:48.357 "digest": "sha384", 00:18:48.357 "dhgroup": "ffdhe4096" 00:18:48.357 } 00:18:48.357 } 00:18:48.357 ]' 00:18:48.357 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:48.614 
13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.614 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.023 13:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.954 13:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.954 13:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.210 13:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.210 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.210 13:37:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:50.772 00:18:50.772 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.772 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.772 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.088 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.088 13:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.088 13:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.088 13:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:51.088 { 00:18:51.088 "cntlid": 81, 00:18:51.088 "qid": 0, 00:18:51.088 "state": "enabled", 00:18:51.088 "listen_address": { 00:18:51.088 "trtype": "TCP", 00:18:51.088 "adrfam": "IPv4", 00:18:51.088 "traddr": "10.0.0.2", 00:18:51.088 "trsvcid": "4420" 00:18:51.088 }, 00:18:51.088 "peer_address": { 00:18:51.088 "trtype": "TCP", 00:18:51.088 "adrfam": "IPv4", 00:18:51.088 "traddr": "10.0.0.1", 00:18:51.088 "trsvcid": "37708" 00:18:51.088 }, 00:18:51.088 "auth": { 00:18:51.088 "state": "completed", 00:18:51.088 "digest": "sha384", 00:18:51.088 "dhgroup": "ffdhe6144" 00:18:51.088 } 00:18:51.088 } 00:18:51.088 ]' 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.088 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.655 13:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.221 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:52.479 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:53.045 00:18:53.045 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.045 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.045 13:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.315 { 00:18:53.315 "cntlid": 83, 00:18:53.315 "qid": 0, 00:18:53.315 "state": "enabled", 00:18:53.315 "listen_address": { 
00:18:53.315 "trtype": "TCP", 00:18:53.315 "adrfam": "IPv4", 00:18:53.315 "traddr": "10.0.0.2", 00:18:53.315 "trsvcid": "4420" 00:18:53.315 }, 00:18:53.315 "peer_address": { 00:18:53.315 "trtype": "TCP", 00:18:53.315 "adrfam": "IPv4", 00:18:53.315 "traddr": "10.0.0.1", 00:18:53.315 "trsvcid": "46652" 00:18:53.315 }, 00:18:53.315 "auth": { 00:18:53.315 "state": "completed", 00:18:53.315 "digest": "sha384", 00:18:53.315 "dhgroup": "ffdhe6144" 00:18:53.315 } 00:18:53.315 } 00:18:53.315 ]' 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.315 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.316 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.316 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.316 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:53.573 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.573 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.573 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.831 13:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.396 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:54.654 13:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:55.222 00:18:55.222 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:55.222 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:55.222 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.480 { 00:18:55.480 "cntlid": 85, 00:18:55.480 "qid": 0, 00:18:55.480 "state": "enabled", 00:18:55.480 "listen_address": { 00:18:55.480 "trtype": "TCP", 00:18:55.480 "adrfam": "IPv4", 00:18:55.480 "traddr": "10.0.0.2", 00:18:55.480 "trsvcid": "4420" 00:18:55.480 }, 00:18:55.480 "peer_address": { 00:18:55.480 "trtype": "TCP", 00:18:55.480 "adrfam": "IPv4", 00:18:55.480 "traddr": "10.0.0.1", 00:18:55.480 "trsvcid": "46678" 00:18:55.480 }, 00:18:55.480 "auth": { 00:18:55.480 "state": "completed", 00:18:55.480 "digest": "sha384", 00:18:55.480 "dhgroup": "ffdhe6144" 00:18:55.480 } 00:18:55.480 } 00:18:55.480 ]' 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.480 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.737 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.737 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.737 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.737 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.737 13:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.995 13:37:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.925 13:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.182 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.759 00:18:57.759 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:57.759 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.759 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.041 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.041 { 00:18:58.041 "cntlid": 87, 00:18:58.041 "qid": 0, 00:18:58.041 "state": "enabled", 00:18:58.041 "listen_address": { 00:18:58.041 "trtype": "TCP", 00:18:58.041 "adrfam": "IPv4", 00:18:58.041 "traddr": "10.0.0.2", 00:18:58.041 "trsvcid": "4420" 00:18:58.041 }, 00:18:58.041 "peer_address": { 00:18:58.041 "trtype": "TCP", 00:18:58.041 "adrfam": "IPv4", 00:18:58.041 "traddr": "10.0.0.1", 00:18:58.041 "trsvcid": "46712" 00:18:58.041 }, 00:18:58.041 "auth": { 00:18:58.041 "state": "completed", 00:18:58.042 "digest": "sha384", 00:18:58.042 "dhgroup": "ffdhe6144" 00:18:58.042 } 00:18:58.042 } 00:18:58.042 ]' 00:18:58.042 13:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.042 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.605 13:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.170 13:37:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:59.428 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:59.995 00:18:59.995 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.995 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.995 13:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:00.253 { 00:19:00.253 "cntlid": 89, 00:19:00.253 "qid": 0, 00:19:00.253 "state": "enabled", 00:19:00.253 "listen_address": { 00:19:00.253 "trtype": "TCP", 00:19:00.253 "adrfam": "IPv4", 00:19:00.253 "traddr": "10.0.0.2", 00:19:00.253 "trsvcid": "4420" 00:19:00.253 }, 00:19:00.253 "peer_address": { 00:19:00.253 "trtype": "TCP", 00:19:00.253 "adrfam": "IPv4", 00:19:00.253 "traddr": "10.0.0.1", 00:19:00.253 "trsvcid": "46742" 00:19:00.253 }, 00:19:00.253 "auth": { 00:19:00.253 "state": "completed", 00:19:00.253 "digest": "sha384", 00:19:00.253 "dhgroup": "ffdhe8192" 00:19:00.253 } 00:19:00.253 } 00:19:00.253 ]' 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.253 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.511 13:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.079 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.646 13:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:01.646 13:37:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:02.211 00:19:02.211 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:02.211 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.211 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:02.470 { 00:19:02.470 "cntlid": 91, 00:19:02.470 "qid": 0, 00:19:02.470 "state": "enabled", 00:19:02.470 "listen_address": { 00:19:02.470 "trtype": "TCP", 00:19:02.470 "adrfam": "IPv4", 00:19:02.470 "traddr": "10.0.0.2", 00:19:02.470 "trsvcid": "4420" 00:19:02.470 }, 00:19:02.470 "peer_address": { 00:19:02.470 "trtype": "TCP", 00:19:02.470 "adrfam": "IPv4", 00:19:02.470 "traddr": "10.0.0.1", 00:19:02.470 "trsvcid": "46770" 00:19:02.470 }, 00:19:02.470 "auth": { 00:19:02.470 "state": "completed", 00:19:02.470 "digest": "sha384", 00:19:02.470 "dhgroup": "ffdhe8192" 00:19:02.470 } 00:19:02.470 } 00:19:02.470 ]' 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.470 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.729 13:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.662 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:03.921 13:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.487 00:19:04.487 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.487 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.487 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:04.745 { 00:19:04.745 "cntlid": 93, 00:19:04.745 "qid": 0, 00:19:04.745 "state": "enabled", 00:19:04.745 "listen_address": { 
00:19:04.745 "trtype": "TCP", 00:19:04.745 "adrfam": "IPv4", 00:19:04.745 "traddr": "10.0.0.2", 00:19:04.745 "trsvcid": "4420" 00:19:04.745 }, 00:19:04.745 "peer_address": { 00:19:04.745 "trtype": "TCP", 00:19:04.745 "adrfam": "IPv4", 00:19:04.745 "traddr": "10.0.0.1", 00:19:04.745 "trsvcid": "42234" 00:19:04.745 }, 00:19:04.745 "auth": { 00:19:04.745 "state": "completed", 00:19:04.745 "digest": "sha384", 00:19:04.745 "dhgroup": "ffdhe8192" 00:19:04.745 } 00:19:04.745 } 00:19:04.745 ]' 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.745 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.002 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.002 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.002 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.002 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.002 13:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.260 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.827 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:05.828 13:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.085 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.651 00:19:06.651 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.651 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.651 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.981 { 00:19:06.981 "cntlid": 95, 00:19:06.981 "qid": 0, 00:19:06.981 "state": "enabled", 00:19:06.981 "listen_address": { 00:19:06.981 "trtype": "TCP", 00:19:06.981 "adrfam": "IPv4", 00:19:06.981 "traddr": "10.0.0.2", 00:19:06.981 "trsvcid": "4420" 00:19:06.981 }, 00:19:06.981 "peer_address": { 00:19:06.981 "trtype": "TCP", 00:19:06.981 "adrfam": "IPv4", 00:19:06.981 "traddr": "10.0.0.1", 00:19:06.981 "trsvcid": "42258" 00:19:06.981 }, 00:19:06.981 "auth": { 00:19:06.981 "state": "completed", 00:19:06.981 "digest": "sha384", 00:19:06.981 "dhgroup": "ffdhe8192" 00:19:06.981 } 00:19:06.981 } 00:19:06.981 ]' 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.981 13:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.239 13:37:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:08.171 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.171 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:08.171 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.171 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.171 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.172 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:08.172 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.172 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:08.172 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.172 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:08.430 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:08.688 00:19:08.688 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.688 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.688 13:37:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.946 { 00:19:08.946 "cntlid": 97, 00:19:08.946 "qid": 0, 00:19:08.946 "state": "enabled", 00:19:08.946 "listen_address": { 00:19:08.946 "trtype": "TCP", 00:19:08.946 "adrfam": "IPv4", 00:19:08.946 "traddr": "10.0.0.2", 00:19:08.946 "trsvcid": "4420" 00:19:08.946 }, 00:19:08.946 "peer_address": { 00:19:08.946 "trtype": "TCP", 00:19:08.946 "adrfam": "IPv4", 00:19:08.946 "traddr": "10.0.0.1", 00:19:08.946 "trsvcid": "42284" 00:19:08.946 }, 00:19:08.946 "auth": { 00:19:08.946 "state": "completed", 00:19:08.946 "digest": "sha512", 00:19:08.946 "dhgroup": "null" 00:19:08.946 } 00:19:08.946 } 00:19:08.946 ]' 00:19:08.946 13:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.946 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.946 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:09.203 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:09.203 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:09.203 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.203 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.203 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.472 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:10.038 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.038 13:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:10.038 13:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.038 13:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.038 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.038 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:10.038 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:19:10.038 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:10.296 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:10.554 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:10.811 { 00:19:10.811 "cntlid": 99, 00:19:10.811 "qid": 0, 00:19:10.811 "state": "enabled", 00:19:10.811 "listen_address": { 00:19:10.811 "trtype": "TCP", 00:19:10.811 "adrfam": "IPv4", 00:19:10.811 "traddr": "10.0.0.2", 00:19:10.811 "trsvcid": "4420" 00:19:10.811 }, 00:19:10.811 "peer_address": { 00:19:10.811 "trtype": "TCP", 00:19:10.811 "adrfam": "IPv4", 00:19:10.811 "traddr": "10.0.0.1", 00:19:10.811 "trsvcid": "42322" 00:19:10.811 }, 00:19:10.811 "auth": { 00:19:10.811 "state": "completed", 00:19:10.811 "digest": "sha512", 00:19:10.811 "dhgroup": "null" 00:19:10.811 } 00:19:10.811 } 00:19:10.811 ]' 00:19:10.811 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:11.076 13:37:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.076 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:11.077 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:11.077 13:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:11.077 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.077 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.077 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.395 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.962 13:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.219 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:12.220 13:37:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:12.478 00:19:12.478 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:12.478 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.478 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:12.736 { 00:19:12.736 "cntlid": 101, 00:19:12.736 "qid": 0, 00:19:12.736 "state": "enabled", 00:19:12.736 "listen_address": { 00:19:12.736 "trtype": "TCP", 00:19:12.736 "adrfam": "IPv4", 00:19:12.736 "traddr": "10.0.0.2", 00:19:12.736 "trsvcid": "4420" 00:19:12.736 }, 00:19:12.736 "peer_address": { 00:19:12.736 "trtype": "TCP", 00:19:12.736 "adrfam": "IPv4", 00:19:12.736 "traddr": "10.0.0.1", 00:19:12.736 "trsvcid": "45186" 00:19:12.736 }, 00:19:12.736 "auth": { 00:19:12.736 "state": "completed", 00:19:12.736 "digest": "sha512", 00:19:12.736 "dhgroup": "null" 00:19:12.736 } 00:19:12.736 } 00:19:12.736 ]' 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.736 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:12.995 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:12.995 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:12.995 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.995 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.995 13:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.253 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:13.825 13:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.092 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:14.362 00:19:14.362 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:14.362 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:14.362 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.957 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:14.957 { 00:19:14.957 "cntlid": 103, 00:19:14.957 "qid": 0, 00:19:14.957 "state": "enabled", 00:19:14.958 "listen_address": { 00:19:14.958 
"trtype": "TCP", 00:19:14.958 "adrfam": "IPv4", 00:19:14.958 "traddr": "10.0.0.2", 00:19:14.958 "trsvcid": "4420" 00:19:14.958 }, 00:19:14.958 "peer_address": { 00:19:14.958 "trtype": "TCP", 00:19:14.958 "adrfam": "IPv4", 00:19:14.958 "traddr": "10.0.0.1", 00:19:14.958 "trsvcid": "45216" 00:19:14.958 }, 00:19:14.958 "auth": { 00:19:14.958 "state": "completed", 00:19:14.958 "digest": "sha512", 00:19:14.958 "dhgroup": "null" 00:19:14.958 } 00:19:14.958 } 00:19:14.958 ]' 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.958 13:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.234 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.819 13:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:16.078 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:16.646 00:19:16.646 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:16.646 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:16.647 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:16.905 { 00:19:16.905 "cntlid": 105, 00:19:16.905 "qid": 0, 00:19:16.905 "state": "enabled", 00:19:16.905 "listen_address": { 00:19:16.905 "trtype": "TCP", 00:19:16.905 "adrfam": "IPv4", 00:19:16.905 "traddr": "10.0.0.2", 00:19:16.905 "trsvcid": "4420" 00:19:16.905 }, 00:19:16.905 "peer_address": { 00:19:16.905 "trtype": "TCP", 00:19:16.905 "adrfam": "IPv4", 00:19:16.905 "traddr": "10.0.0.1", 00:19:16.905 "trsvcid": "45244" 00:19:16.905 }, 00:19:16.905 "auth": { 00:19:16.905 "state": "completed", 00:19:16.905 "digest": "sha512", 00:19:16.905 "dhgroup": "ffdhe2048" 00:19:16.905 } 00:19:16.905 } 00:19:16.905 ]' 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.905 13:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:17.163 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.163 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:17.163 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.163 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.163 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.422 13:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.988 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.246 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:19:18.246 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:18.246 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:18.505 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:18.792 00:19:18.792 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:18.792 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:18.792 13:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
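Every connect_authenticate round echoed above follows the same host/target RPC sequence; below is a condensed bash sketch of one sha512/ffdhe2048 round, reconstructed only from commands that appear in this log. The NQNs, address and key names are the ones used in this run; rpc_cmd is the test framework's wrapper for the target-side rpc.py (its socket is not printed in this excerpt), the jq pipelines condense the separate capture-and-check steps shown above, and the DHHC-1 secret is abbreviated rather than repeated.

    # Sketch of one connect_authenticate round from target/auth.sh, as exercised in this run.
    hostrpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # host-side SPDK app answers on /var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4

    # Host: restrict the initiator to a single digest/dhgroup combination.
    $hostrpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # Target: allow this host on the subsystem with DH-HMAC-CHAP key1.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
    # Host: attach a controller, which triggers the authentication handshake.
    $hostrpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1
    # Verify: controller name on the host, auth state/digest/dhgroup of the qpair on the target.
    $hostrpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expects nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'            # expects "completed"
    # Tear down the SPDK-host path, then repeat the same round from the kernel initiator.
    $hostrpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret "DHHC-1:01:..."   # key1 secret as printed above
    nvme disconnect -n "$subnqn"
    # Target: drop the host again before the next key/dhgroup/digest combination.
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The null, ffdhe2048 and ffdhe3072 rounds in this excerpt differ only in the --dhchap-digests/--dhchap-dhgroups pair passed to bdev_nvme_set_options and in the key index used.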
00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:19.050 { 00:19:19.050 "cntlid": 107, 00:19:19.050 "qid": 0, 00:19:19.050 "state": "enabled", 00:19:19.050 "listen_address": { 00:19:19.050 "trtype": "TCP", 00:19:19.050 "adrfam": "IPv4", 00:19:19.050 "traddr": "10.0.0.2", 00:19:19.050 "trsvcid": "4420" 00:19:19.050 }, 00:19:19.050 "peer_address": { 00:19:19.050 "trtype": "TCP", 00:19:19.050 "adrfam": "IPv4", 00:19:19.050 "traddr": "10.0.0.1", 00:19:19.050 "trsvcid": "45270" 00:19:19.050 }, 00:19:19.050 "auth": { 00:19:19.050 "state": "completed", 00:19:19.050 "digest": "sha512", 00:19:19.050 "dhgroup": "ffdhe2048" 00:19:19.050 } 00:19:19.050 } 00:19:19.050 ]' 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.050 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:19.308 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.308 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.308 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.566 13:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:20.132 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.133 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:20.698 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:20.956 00:19:20.956 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:20.956 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.956 13:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:21.214 { 00:19:21.214 "cntlid": 109, 00:19:21.214 "qid": 0, 00:19:21.214 "state": "enabled", 00:19:21.214 "listen_address": { 00:19:21.214 "trtype": "TCP", 00:19:21.214 "adrfam": "IPv4", 00:19:21.214 "traddr": "10.0.0.2", 00:19:21.214 "trsvcid": "4420" 00:19:21.214 }, 00:19:21.214 "peer_address": { 00:19:21.214 "trtype": "TCP", 00:19:21.214 "adrfam": "IPv4", 00:19:21.214 "traddr": "10.0.0.1", 00:19:21.214 "trsvcid": "45298" 00:19:21.214 }, 00:19:21.214 "auth": { 00:19:21.214 "state": "completed", 00:19:21.214 "digest": "sha512", 00:19:21.214 "dhgroup": "ffdhe2048" 00:19:21.214 } 00:19:21.214 } 00:19:21.214 ]' 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.214 13:37:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.214 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.471 13:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:22.404 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.661 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.919 00:19:22.919 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:22.919 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.919 13:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:23.179 { 00:19:23.179 "cntlid": 111, 00:19:23.179 "qid": 0, 00:19:23.179 "state": "enabled", 00:19:23.179 "listen_address": { 00:19:23.179 "trtype": "TCP", 00:19:23.179 "adrfam": "IPv4", 00:19:23.179 "traddr": "10.0.0.2", 00:19:23.179 "trsvcid": "4420" 00:19:23.179 }, 00:19:23.179 "peer_address": { 00:19:23.179 "trtype": "TCP", 00:19:23.179 "adrfam": "IPv4", 00:19:23.179 "traddr": "10.0.0.1", 00:19:23.179 "trsvcid": "42458" 00:19:23.179 }, 00:19:23.179 "auth": { 00:19:23.179 "state": "completed", 00:19:23.179 "digest": "sha512", 00:19:23.179 "dhgroup": "ffdhe2048" 00:19:23.179 } 00:19:23.179 } 00:19:23.179 ]' 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.179 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.437 13:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.002 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.260 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:24.828 00:19:24.828 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:24.828 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:24.828 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.086 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:25.086 { 
00:19:25.086 "cntlid": 113, 00:19:25.086 "qid": 0, 00:19:25.086 "state": "enabled", 00:19:25.086 "listen_address": { 00:19:25.086 "trtype": "TCP", 00:19:25.086 "adrfam": "IPv4", 00:19:25.086 "traddr": "10.0.0.2", 00:19:25.087 "trsvcid": "4420" 00:19:25.087 }, 00:19:25.087 "peer_address": { 00:19:25.087 "trtype": "TCP", 00:19:25.087 "adrfam": "IPv4", 00:19:25.087 "traddr": "10.0.0.1", 00:19:25.087 "trsvcid": "42488" 00:19:25.087 }, 00:19:25.087 "auth": { 00:19:25.087 "state": "completed", 00:19:25.087 "digest": "sha512", 00:19:25.087 "dhgroup": "ffdhe3072" 00:19:25.087 } 00:19:25.087 } 00:19:25.087 ]' 00:19:25.087 13:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.087 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.344 13:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:26.279 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.280 13:37:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.280 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:26.845 00:19:26.845 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:26.845 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.845 13:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:27.103 { 00:19:27.103 "cntlid": 115, 00:19:27.103 "qid": 0, 00:19:27.103 "state": "enabled", 00:19:27.103 "listen_address": { 00:19:27.103 "trtype": "TCP", 00:19:27.103 "adrfam": "IPv4", 00:19:27.103 "traddr": "10.0.0.2", 00:19:27.103 "trsvcid": "4420" 00:19:27.103 }, 00:19:27.103 "peer_address": { 00:19:27.103 "trtype": "TCP", 00:19:27.103 "adrfam": "IPv4", 00:19:27.103 "traddr": "10.0.0.1", 00:19:27.103 "trsvcid": "42516" 00:19:27.103 }, 00:19:27.103 "auth": { 00:19:27.103 "state": "completed", 00:19:27.103 "digest": "sha512", 00:19:27.103 "dhgroup": "ffdhe3072" 00:19:27.103 } 00:19:27.103 } 00:19:27.103 ]' 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.103 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.360 13:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.311 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.570 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:28.829 00:19:28.829 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:28.829 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.829 13:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 
-- # jq -r '.[].name' 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:29.086 { 00:19:29.086 "cntlid": 117, 00:19:29.086 "qid": 0, 00:19:29.086 "state": "enabled", 00:19:29.086 "listen_address": { 00:19:29.086 "trtype": "TCP", 00:19:29.086 "adrfam": "IPv4", 00:19:29.086 "traddr": "10.0.0.2", 00:19:29.086 "trsvcid": "4420" 00:19:29.086 }, 00:19:29.086 "peer_address": { 00:19:29.086 "trtype": "TCP", 00:19:29.086 "adrfam": "IPv4", 00:19:29.086 "traddr": "10.0.0.1", 00:19:29.086 "trsvcid": "42542" 00:19:29.086 }, 00:19:29.086 "auth": { 00:19:29.086 "state": "completed", 00:19:29.086 "digest": "sha512", 00:19:29.086 "dhgroup": "ffdhe3072" 00:19:29.086 } 00:19:29.086 } 00:19:29.086 ]' 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.086 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:29.344 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.344 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:29.344 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.344 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.344 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.601 13:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.167 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.425 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.682 00:19:30.682 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:30.682 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:30.682 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.940 13:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:30.940 { 00:19:30.940 "cntlid": 119, 00:19:30.940 "qid": 0, 00:19:30.940 "state": "enabled", 00:19:30.940 "listen_address": { 00:19:30.940 "trtype": "TCP", 00:19:30.940 "adrfam": "IPv4", 00:19:30.940 "traddr": "10.0.0.2", 00:19:30.940 "trsvcid": "4420" 00:19:30.940 }, 00:19:30.940 "peer_address": { 00:19:30.940 "trtype": "TCP", 00:19:30.940 "adrfam": "IPv4", 00:19:30.940 "traddr": "10.0.0.1", 00:19:30.940 "trsvcid": "42574" 00:19:30.940 }, 00:19:30.940 "auth": { 00:19:30.940 "state": "completed", 00:19:30.940 "digest": "sha512", 00:19:30.940 "dhgroup": "ffdhe3072" 00:19:30.940 } 00:19:30.940 } 00:19:30.940 ]' 00:19:30.940 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
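The trace above repeats one pattern per digest/dhgroup/key combination. What follows is a condensed sketch of that sequence, assuming the same RPC script path, host socket, NQNs and pre-registered key names that appear in the run (the key material itself is set up earlier in the script and is not shown in this excerpt):

#!/usr/bin/env bash
# Sketch of one connect_authenticate round as exercised by target/auth.sh
# (paths, NQNs and key names taken from the trace above; adjust as needed).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
digest=sha512 dhgroup=ffdhe3072 key=key2

# 1. Restrict the host-side bdev_nvme module to one digest/dhgroup pair.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the target subsystem with the chosen DH-HMAC-CHAP key
#    (this RPC goes to the target's default socket, hence no -s here).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# 3. Attach a controller from the host side, authenticating with the same key.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# 4. Verify the controller exists and inspect the qpair's auth block.
[[ $("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Tear down before the next combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0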
00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.197 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.454 13:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.387 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:32.952 00:19:32.952 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:32.952 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.952 13:37:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.208 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:33.208 { 00:19:33.208 "cntlid": 121, 00:19:33.208 "qid": 0, 00:19:33.208 "state": "enabled", 00:19:33.208 "listen_address": { 00:19:33.208 "trtype": "TCP", 00:19:33.208 "adrfam": "IPv4", 00:19:33.208 "traddr": "10.0.0.2", 00:19:33.208 "trsvcid": "4420" 00:19:33.208 }, 00:19:33.208 "peer_address": { 00:19:33.209 "trtype": "TCP", 00:19:33.209 "adrfam": "IPv4", 00:19:33.209 "traddr": "10.0.0.1", 00:19:33.209 "trsvcid": "58608" 00:19:33.209 }, 00:19:33.209 "auth": { 00:19:33.209 "state": "completed", 00:19:33.209 "digest": "sha512", 00:19:33.209 "dhgroup": "ffdhe4096" 00:19:33.209 } 00:19:33.209 } 00:19:33.209 ]' 00:19:33.209 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:33.209 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.209 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:33.209 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.209 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:33.465 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.466 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.466 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.723 13:37:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.287 13:37:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.287 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.545 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:19:34.545 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:34.545 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.545 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:34.545 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:34.546 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:35.111 00:19:35.111 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:35.111 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.111 13:37:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:35.368 { 
00:19:35.368 "cntlid": 123, 00:19:35.368 "qid": 0, 00:19:35.368 "state": "enabled", 00:19:35.368 "listen_address": { 00:19:35.368 "trtype": "TCP", 00:19:35.368 "adrfam": "IPv4", 00:19:35.368 "traddr": "10.0.0.2", 00:19:35.368 "trsvcid": "4420" 00:19:35.368 }, 00:19:35.368 "peer_address": { 00:19:35.368 "trtype": "TCP", 00:19:35.368 "adrfam": "IPv4", 00:19:35.368 "traddr": "10.0.0.1", 00:19:35.368 "trsvcid": "58642" 00:19:35.368 }, 00:19:35.368 "auth": { 00:19:35.368 "state": "completed", 00:19:35.368 "digest": "sha512", 00:19:35.368 "dhgroup": "ffdhe4096" 00:19:35.368 } 00:19:35.368 } 00:19:35.368 ]' 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.368 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.626 13:37:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:36.561 13:37:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:37.127 00:19:37.127 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:37.127 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.127 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:37.385 { 00:19:37.385 "cntlid": 125, 00:19:37.385 "qid": 0, 00:19:37.385 "state": "enabled", 00:19:37.385 "listen_address": { 00:19:37.385 "trtype": "TCP", 00:19:37.385 "adrfam": "IPv4", 00:19:37.385 "traddr": "10.0.0.2", 00:19:37.385 "trsvcid": "4420" 00:19:37.385 }, 00:19:37.385 "peer_address": { 00:19:37.385 "trtype": "TCP", 00:19:37.385 "adrfam": "IPv4", 00:19:37.385 "traddr": "10.0.0.1", 00:19:37.385 "trsvcid": "58666" 00:19:37.385 }, 00:19:37.385 "auth": { 00:19:37.385 "state": "completed", 00:19:37.385 "digest": "sha512", 00:19:37.385 "dhgroup": "ffdhe4096" 00:19:37.385 } 00:19:37.385 } 00:19:37.385 ]' 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.385 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.643 13:37:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.210 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.468 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.037 00:19:39.037 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:39.037 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:39.037 13:37:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
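Each round then asserts the negotiated parameters from the qpair listing. A minimal stand-alone check over the same JSON shape printed in the trace (field names as shown above; expected values are whatever the current round configured, sha512/ffdhe4096 in this stretch of the log) might look like:

# Assert the negotiated auth parameters on the first qpair of the subsystem.
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs \
    nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]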
00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:39.354 { 00:19:39.354 "cntlid": 127, 00:19:39.354 "qid": 0, 00:19:39.354 "state": "enabled", 00:19:39.354 "listen_address": { 00:19:39.354 "trtype": "TCP", 00:19:39.354 "adrfam": "IPv4", 00:19:39.354 "traddr": "10.0.0.2", 00:19:39.354 "trsvcid": "4420" 00:19:39.354 }, 00:19:39.354 "peer_address": { 00:19:39.354 "trtype": "TCP", 00:19:39.354 "adrfam": "IPv4", 00:19:39.354 "traddr": "10.0.0.1", 00:19:39.354 "trsvcid": "58692" 00:19:39.354 }, 00:19:39.354 "auth": { 00:19:39.354 "state": "completed", 00:19:39.354 "digest": "sha512", 00:19:39.354 "dhgroup": "ffdhe4096" 00:19:39.354 } 00:19:39.354 } 00:19:39.354 ]' 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:39.354 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.355 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.614 13:37:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:19:40.181 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.452 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:40.453 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:41.023 00:19:41.023 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:41.023 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:41.023 13:37:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:41.023 { 00:19:41.023 "cntlid": 129, 00:19:41.023 "qid": 0, 00:19:41.023 "state": "enabled", 00:19:41.023 "listen_address": { 00:19:41.023 "trtype": "TCP", 00:19:41.023 "adrfam": "IPv4", 00:19:41.023 "traddr": "10.0.0.2", 00:19:41.023 "trsvcid": "4420" 00:19:41.023 }, 00:19:41.023 "peer_address": { 00:19:41.023 "trtype": "TCP", 00:19:41.023 "adrfam": "IPv4", 00:19:41.023 "traddr": "10.0.0.1", 00:19:41.023 "trsvcid": "58708" 00:19:41.023 }, 00:19:41.023 "auth": { 00:19:41.023 "state": "completed", 00:19:41.023 "digest": "sha512", 00:19:41.023 "dhgroup": "ffdhe6144" 00:19:41.023 } 00:19:41.023 } 00:19:41.023 ]' 00:19:41.023 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
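Between rounds the script also exercises the kernel initiator path with nvme-cli, passing the DH-HMAC-CHAP secret directly on the command line. A sketch of that step, with the secret replaced by a placeholder (the real DHHC-1 strings appear in full in the trace):

# Kernel-initiator check: connect with an explicit DH-HMAC-CHAP secret, then
# disconnect and drop the host from the subsystem before the next key is tried.
# The --dhchap-secret value below is a placeholder, not a real key.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 \
    --dhchap-secret 'DHHC-1:00:<secret from the corresponding key file>'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
    nqn.2024-03.io.spdk:cnode0 "$hostnqn"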
00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.282 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.541 13:37:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.477 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:42.477 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:43.045 00:19:43.045 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:43.045 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.045 13:37:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:43.045 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.045 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.045 13:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.045 13:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.303 13:37:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.303 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:43.303 { 00:19:43.303 "cntlid": 131, 00:19:43.303 "qid": 0, 00:19:43.303 "state": "enabled", 00:19:43.303 "listen_address": { 00:19:43.303 "trtype": "TCP", 00:19:43.303 "adrfam": "IPv4", 00:19:43.303 "traddr": "10.0.0.2", 00:19:43.303 "trsvcid": "4420" 00:19:43.303 }, 00:19:43.303 "peer_address": { 00:19:43.303 "trtype": "TCP", 00:19:43.303 "adrfam": "IPv4", 00:19:43.303 "traddr": "10.0.0.1", 00:19:43.303 "trsvcid": "36752" 00:19:43.303 }, 00:19:43.303 "auth": { 00:19:43.303 "state": "completed", 00:19:43.303 "digest": "sha512", 00:19:43.304 "dhgroup": "ffdhe6144" 00:19:43.304 } 00:19:43.304 } 00:19:43.304 ]' 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.304 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.561 13:37:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.128 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.388 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:44.689 00:19:44.689 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:44.689 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:44.689 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.967 13:37:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:44.967 { 00:19:44.967 "cntlid": 133, 
00:19:44.967 "qid": 0, 00:19:44.967 "state": "enabled", 00:19:44.967 "listen_address": { 00:19:44.967 "trtype": "TCP", 00:19:44.967 "adrfam": "IPv4", 00:19:44.967 "traddr": "10.0.0.2", 00:19:44.967 "trsvcid": "4420" 00:19:44.967 }, 00:19:44.967 "peer_address": { 00:19:44.967 "trtype": "TCP", 00:19:44.967 "adrfam": "IPv4", 00:19:44.967 "traddr": "10.0.0.1", 00:19:44.967 "trsvcid": "36772" 00:19:44.967 }, 00:19:44.967 "auth": { 00:19:44.967 "state": "completed", 00:19:44.967 "digest": "sha512", 00:19:44.967 "dhgroup": "ffdhe6144" 00:19:44.967 } 00:19:44.967 } 00:19:44.967 ]' 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.967 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:45.226 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.226 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:45.226 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.226 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.226 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.484 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:46.048 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.048 13:37:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:46.048 13:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.048 13:37:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.048 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.048 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:46.048 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.048 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.306 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.880 00:19:46.880 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:46.880 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.880 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:47.147 { 00:19:47.147 "cntlid": 135, 00:19:47.147 "qid": 0, 00:19:47.147 "state": "enabled", 00:19:47.147 "listen_address": { 00:19:47.147 "trtype": "TCP", 00:19:47.147 "adrfam": "IPv4", 00:19:47.147 "traddr": "10.0.0.2", 00:19:47.147 "trsvcid": "4420" 00:19:47.147 }, 00:19:47.147 "peer_address": { 00:19:47.147 "trtype": "TCP", 00:19:47.147 "adrfam": "IPv4", 00:19:47.147 "traddr": "10.0.0.1", 00:19:47.147 "trsvcid": "36792" 00:19:47.147 }, 00:19:47.147 "auth": { 00:19:47.147 "state": "completed", 00:19:47.147 "digest": "sha512", 00:19:47.147 "dhgroup": "ffdhe6144" 00:19:47.147 } 00:19:47.147 } 00:19:47.147 ]' 00:19:47.147 13:37:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.147 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.417 13:38:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:48.014 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.015 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.361 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:19:48.361 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:48.361 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.361 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.361 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.362 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:48.984 00:19:48.984 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:48.984 13:38:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:48.984 13:38:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:49.244 { 00:19:49.244 "cntlid": 137, 00:19:49.244 "qid": 0, 00:19:49.244 "state": "enabled", 00:19:49.244 "listen_address": { 00:19:49.244 "trtype": "TCP", 00:19:49.244 "adrfam": "IPv4", 00:19:49.244 "traddr": "10.0.0.2", 00:19:49.244 "trsvcid": "4420" 00:19:49.244 }, 00:19:49.244 "peer_address": { 00:19:49.244 "trtype": "TCP", 00:19:49.244 "adrfam": "IPv4", 00:19:49.244 "traddr": "10.0.0.1", 00:19:49.244 "trsvcid": "36832" 00:19:49.244 }, 00:19:49.244 "auth": { 00:19:49.244 "state": "completed", 00:19:49.244 "digest": "sha512", 00:19:49.244 "dhgroup": "ffdhe8192" 00:19:49.244 } 00:19:49.244 } 00:19:49.244 ]' 00:19:49.244 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.503 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.777 13:38:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:50.709 13:38:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:51.642 00:19:51.642 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:51.642 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:51.642 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:51.901 { 00:19:51.901 "cntlid": 139, 00:19:51.901 "qid": 0, 00:19:51.901 "state": "enabled", 00:19:51.901 "listen_address": { 00:19:51.901 "trtype": "TCP", 00:19:51.901 "adrfam": "IPv4", 00:19:51.901 "traddr": "10.0.0.2", 00:19:51.901 "trsvcid": "4420" 00:19:51.901 }, 00:19:51.901 "peer_address": { 00:19:51.901 "trtype": "TCP", 00:19:51.901 "adrfam": "IPv4", 00:19:51.901 "traddr": "10.0.0.1", 00:19:51.901 "trsvcid": "36860" 00:19:51.901 }, 00:19:51.901 "auth": { 00:19:51.901 "state": "completed", 00:19:51.901 "digest": "sha512", 00:19:51.901 "dhgroup": "ffdhe8192" 00:19:51.901 } 00:19:51.901 } 00:19:51.901 ]' 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
jq -r '.[0].auth.digest' 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.901 13:38:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.158 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:01:ZTgwMGJlYWYxZTQxNzg0ZTBlZDg5YzM4ZjU3ZDE1MjSsT9JX: 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.104 13:38:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key2 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.363 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.929 00:19:53.929 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:53.929 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:53.929 13:38:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:54.215 { 00:19:54.215 "cntlid": 141, 00:19:54.215 "qid": 0, 00:19:54.215 "state": "enabled", 00:19:54.215 "listen_address": { 00:19:54.215 "trtype": "TCP", 00:19:54.215 "adrfam": "IPv4", 00:19:54.215 "traddr": "10.0.0.2", 00:19:54.215 "trsvcid": "4420" 00:19:54.215 }, 00:19:54.215 "peer_address": { 00:19:54.215 "trtype": "TCP", 00:19:54.215 "adrfam": "IPv4", 00:19:54.215 "traddr": "10.0.0.1", 00:19:54.215 "trsvcid": "54262" 00:19:54.215 }, 00:19:54.215 "auth": { 00:19:54.215 "state": "completed", 00:19:54.215 "digest": "sha512", 00:19:54.215 "dhgroup": "ffdhe8192" 00:19:54.215 } 00:19:54.215 } 00:19:54.215 ]' 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.215 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.472 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.472 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.472 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.472 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.472 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.742 13:38:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:02:YjFmNjdjYjNmOWJiMmE4NzMyOTVmZTcyNWU0M2NjNWFjMDI0YTVmNjBjYjA3ODJlRYRf4g==: 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.315 13:38:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.315 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key3 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.881 13:38:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.447 00:19:56.447 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.447 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.447 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.705 { 
00:19:56.705 "cntlid": 143, 00:19:56.705 "qid": 0, 00:19:56.705 "state": "enabled", 00:19:56.705 "listen_address": { 00:19:56.705 "trtype": "TCP", 00:19:56.705 "adrfam": "IPv4", 00:19:56.705 "traddr": "10.0.0.2", 00:19:56.705 "trsvcid": "4420" 00:19:56.705 }, 00:19:56.705 "peer_address": { 00:19:56.705 "trtype": "TCP", 00:19:56.705 "adrfam": "IPv4", 00:19:56.705 "traddr": "10.0.0.1", 00:19:56.705 "trsvcid": "54294" 00:19:56.705 }, 00:19:56.705 "auth": { 00:19:56.705 "state": "completed", 00:19:56.705 "digest": "sha512", 00:19:56.705 "dhgroup": "ffdhe8192" 00:19:56.705 } 00:19:56.705 } 00:19:56.705 ]' 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.705 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.706 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.706 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.963 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.963 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.963 13:38:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.219 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:03:MGEwNGIwMGY0ZjJjODQwZWI0MDg3M2ZkOTdjNzQxYWQ3NzM4MDVlYzc1ZDI5OGI1MTM1NzQxZTdmZTgzZGE0N0/ghaY=: 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:57.784 13:38:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 
-- # connect_authenticate sha512 ffdhe8192 0 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key0 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.042 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:58.608 00:19:58.868 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:58.869 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.869 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:59.127 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.127 13:38:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.127 13:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.127 13:38:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:59.127 { 00:19:59.127 "cntlid": 145, 00:19:59.127 "qid": 0, 00:19:59.127 "state": "enabled", 00:19:59.127 "listen_address": { 00:19:59.127 "trtype": "TCP", 00:19:59.127 "adrfam": "IPv4", 00:19:59.127 "traddr": "10.0.0.2", 00:19:59.127 "trsvcid": "4420" 00:19:59.127 }, 00:19:59.127 "peer_address": { 00:19:59.127 "trtype": "TCP", 00:19:59.127 "adrfam": "IPv4", 00:19:59.127 "traddr": "10.0.0.1", 00:19:59.127 "trsvcid": "54338" 00:19:59.127 }, 00:19:59.127 "auth": { 00:19:59.127 "state": "completed", 00:19:59.127 "digest": "sha512", 00:19:59.127 "dhgroup": "ffdhe8192" 00:19:59.127 } 00:19:59.127 } 00:19:59.127 ]' 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.127 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.384 13:38:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-secret DHHC-1:00:MWE3NzBjNTAwNzQ5ODVhMTk0OGY1MGY4YzkwNmFhMjc2ODNmMzcyZjBkM2VjMzY17CoG/w==: 00:19:59.951 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.951 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:19:59.951 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.951 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --dhchap-key key1 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.209 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.776 request: 00:20:00.776 { 00:20:00.776 "name": "nvme0", 00:20:00.776 "trtype": "tcp", 00:20:00.776 "traddr": "10.0.0.2", 00:20:00.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4", 00:20:00.776 "adrfam": "ipv4", 00:20:00.776 "trsvcid": "4420", 00:20:00.776 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:00.776 "dhchap_key": "key2", 00:20:00.776 "method": "bdev_nvme_attach_controller", 00:20:00.776 "req_id": 1 00:20:00.776 } 00:20:00.776 Got JSON-RPC error response 00:20:00.776 response: 00:20:00.776 { 00:20:00.776 "code": -32602, 00:20:00.776 "message": "Invalid parameters" 00:20:00.776 } 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 82247 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 82247 ']' 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 82247 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82247 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82247' 00:20:00.776 killing process with pid 82247 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 82247 00:20:00.776 13:38:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 82247 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.034 13:38:14 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.034 rmmod nvme_tcp 00:20:01.034 rmmod nvme_fabrics 00:20:01.034 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82217 ']' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 82217 ']' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:01.292 killing process with pid 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82217' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 82217 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.292 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.551 13:38:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:01.551 13:38:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.do7 /tmp/spdk.key-sha256.HGO /tmp/spdk.key-sha384.M3I /tmp/spdk.key-sha512.xW6 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:20:01.551 00:20:01.551 real 2m42.602s 00:20:01.551 user 6m23.551s 00:20:01.551 sys 0m31.867s 00:20:01.551 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:01.551 ************************************ 00:20:01.551 END TEST nvmf_auth_target 00:20:01.551 13:38:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.551 ************************************ 00:20:01.551 13:38:14 nvmf_tcp -- 
nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:01.551 13:38:14 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:01.551 13:38:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:01.551 13:38:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:01.551 13:38:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.551 ************************************ 00:20:01.551 START TEST nvmf_bdevio_no_huge 00:20:01.551 ************************************ 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:01.551 * Looking for test storage... 00:20:01.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.551 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@14 -- # nvmftestinit 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:01.552 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:01.823 Cannot find device "nvmf_tgt_br" 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:01.823 Cannot find device "nvmf_tgt_br2" 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:01.823 13:38:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:01.823 Cannot find device "nvmf_tgt_br" 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:01.823 Cannot find device "nvmf_tgt_br2" 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:01.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:01.823 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:01.823 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:02.081 13:38:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:02.081 13:38:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:02.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:20:02.081 00:20:02.081 --- 10.0.0.2 ping statistics --- 00:20:02.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.081 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:02.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:02.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:02.081 00:20:02.081 --- 10.0.0.3 ping statistics --- 00:20:02.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.081 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:02.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:02.081 00:20:02.081 --- 10.0.0.1 ping statistics --- 00:20:02.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.081 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=85404 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:02.081 13:38:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 85404 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 85404 ']' 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.081 13:38:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.081 [2024-05-15 13:38:15.111872] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:02.081 [2024-05-15 13:38:15.111950] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:02.339 [2024-05-15 13:38:15.263925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:02.339 [2024-05-15 13:38:15.269023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.597 [2024-05-15 13:38:15.446338] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.597 [2024-05-15 13:38:15.446428] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.597 [2024-05-15 13:38:15.446445] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.597 [2024-05-15 13:38:15.446458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.597 [2024-05-15 13:38:15.446469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
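For reference, the target bring-up traced above reduces to the single command below. The command line itself is copied from the trace; the flag annotations and the trailing & are interpretation of how the nvmfappstart/waitforlisten helpers run it, not part of the test output.

# Start the SPDK NVMe-oF target inside the test network namespace without hugepages.
#   -i 0               the NVMF_APP_SHM_ID the harness passes (see the NVMF_APP construction earlier in this trace)
#   -e 0xFFFF          tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF" notice
#   --no-huge -s 1024  run on ordinary 4 KiB pages with a 1024 MiB memory cap
#                      (hence "-m 1024 --no-huge" in the DPDK EAL parameters above)
#   -m 0x78            core mask 0b1111000, i.e. cores 3-6, matching the reactor notices that follow
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &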
00:20:02.597 [2024-05-15 13:38:15.446662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.597 [2024-05-15 13:38:15.446814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.597 [2024-05-15 13:38:15.447620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.597 [2024-05-15 13:38:15.447626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.163 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.163 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 [2024-05-15 13:38:16.176274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 Malloc0 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 [2024-05-15 13:38:16.222835] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:20:03.164 [2024-05-15 13:38:16.223483] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.164 { 00:20:03.164 "params": { 00:20:03.164 "name": "Nvme$subsystem", 00:20:03.164 "trtype": "$TEST_TRANSPORT", 00:20:03.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.164 "adrfam": "ipv4", 00:20:03.164 "trsvcid": "$NVMF_PORT", 00:20:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.164 "hdgst": ${hdgst:-false}, 00:20:03.164 "ddgst": ${ddgst:-false} 00:20:03.164 }, 00:20:03.164 "method": "bdev_nvme_attach_controller" 00:20:03.164 } 00:20:03.164 EOF 00:20:03.164 )") 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:03.164 13:38:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.164 "params": { 00:20:03.164 "name": "Nvme1", 00:20:03.164 "trtype": "tcp", 00:20:03.164 "traddr": "10.0.0.2", 00:20:03.164 "adrfam": "ipv4", 00:20:03.164 "trsvcid": "4420", 00:20:03.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.164 "hdgst": false, 00:20:03.164 "ddgst": false 00:20:03.164 }, 00:20:03.164 "method": "bdev_nvme_attach_controller" 00:20:03.164 }' 00:20:03.423 [2024-05-15 13:38:16.291351] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:03.423 [2024-05-15 13:38:16.291453] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid85446 ] 00:20:03.423 [2024-05-15 13:38:16.438544] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
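On the initiator side, the bdevio binary recorded above is handed the generated configuration on /dev/fd/62, which is consistent with the JSON being attached through bash process substitution; gen_nvmf_target_json is the test/nvmf/common.sh helper whose heredoc output appears in the trace. A minimal sketch of that invocation follows (the <( ) form is inferred from the /dev/fd/62 argument rather than quoted from the script):

# Run the bdevio initiator, also without hugepages, against the generated JSON config
# that describes the Nvme1 controller (tcp, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1).
# gen_nvmf_target_json is available once test/nvmf/common.sh has been sourced.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024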
00:20:03.423 [2024-05-15 13:38:16.443390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.683 [2024-05-15 13:38:16.577814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.683 [2024-05-15 13:38:16.577911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.683 [2024-05-15 13:38:16.577920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.683 I/O targets: 00:20:03.683 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:03.683 00:20:03.683 00:20:03.683 CUnit - A unit testing framework for C - Version 2.1-3 00:20:03.683 http://cunit.sourceforge.net/ 00:20:03.683 00:20:03.683 00:20:03.683 Suite: bdevio tests on: Nvme1n1 00:20:03.683 Test: blockdev write read block ...passed 00:20:03.683 Test: blockdev write zeroes read block ...passed 00:20:03.941 Test: blockdev write zeroes read no split ...passed 00:20:03.941 Test: blockdev write zeroes read split ...passed 00:20:03.941 Test: blockdev write zeroes read split partial ...passed 00:20:03.941 Test: blockdev reset ...[2024-05-15 13:38:16.813436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.941 [2024-05-15 13:38:16.813596] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cba80 (9): Bad file descriptor 00:20:03.942 [2024-05-15 13:38:16.829752] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:03.942 passed 00:20:03.942 Test: blockdev write read 8 blocks ...passed 00:20:03.942 Test: blockdev write read size > 128k ...passed 00:20:03.942 Test: blockdev write read invalid size ...passed 00:20:03.942 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.942 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.942 Test: blockdev write read max offset ...passed 00:20:03.942 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.942 Test: blockdev writev readv 8 blocks ...passed 00:20:03.942 Test: blockdev writev readv 30 x 1block ...passed 00:20:03.942 Test: blockdev writev readv block ...passed 00:20:03.942 Test: blockdev writev readv size > 128k ...passed 00:20:03.942 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:03.942 Test: blockdev comparev and writev ...[2024-05-15 13:38:16.840664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.840734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.840760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.840777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.841361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.841396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.841424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:20:03.942 [2024-05-15 13:38:16.841444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.841952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.841990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.842018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.842045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.842713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.842747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.842769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:03.942 [2024-05-15 13:38:16.842785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:03.942 passed 00:20:03.942 Test: blockdev nvme passthru rw ...passed 00:20:03.942 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:38:16.843736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.942 [2024-05-15 13:38:16.843769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.843907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.942 [2024-05-15 13:38:16.843926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:03.942 [2024-05-15 13:38:16.844048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.942 [2024-05-15 13:38:16.844078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:03.942 passed 00:20:03.942 Test: blockdev nvme admin passthru ...[2024-05-15 13:38:16.844211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:03.942 [2024-05-15 13:38:16.844230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:03.942 passed 00:20:03.942 Test: blockdev copy ...passed 00:20:03.942 00:20:03.942 Run Summary: Type Total Ran Passed Failed Inactive 00:20:03.942 suites 1 1 n/a 0 0 00:20:03.942 tests 23 23 23 0 0 00:20:03.942 asserts 152 152 152 0 n/a 00:20:03.942 00:20:03.942 Elapsed time = 0.183 seconds 00:20:04.508 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.508 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.508 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.509 rmmod nvme_tcp 00:20:04.509 rmmod nvme_fabrics 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 85404 ']' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 85404 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 85404 ']' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 85404 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85404 00:20:04.509 killing process with pid 85404 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85404' 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 85404 00:20:04.509 [2024-05-15 13:38:17.428654] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:04.509 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 85404 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.767 13:38:17 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.767 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.025 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:05.025 00:20:05.025 real 0m3.383s 00:20:05.025 user 0m10.760s 00:20:05.025 sys 0m1.631s 00:20:05.025 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:05.025 13:38:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:05.025 ************************************ 00:20:05.025 END TEST nvmf_bdevio_no_huge 00:20:05.025 ************************************ 00:20:05.025 13:38:17 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.025 13:38:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:05.025 13:38:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:05.025 13:38:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:05.025 ************************************ 00:20:05.025 START TEST nvmf_tls 00:20:05.025 ************************************ 00:20:05.025 13:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:05.025 * Looking for test storage... 00:20:05.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.025 13:38:18 
nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:05.025 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:05.026 Cannot find device "nvmf_tgt_br" 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.026 Cannot find device "nvmf_tgt_br2" 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:05.026 Cannot find device "nvmf_tgt_br" 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:05.026 Cannot find device "nvmf_tgt_br2" 
00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:20:05.026 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.327 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:05.327 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:05.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:20:05.327 00:20:05.327 --- 10.0.0.2 ping statistics --- 00:20:05.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.327 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:05.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:05.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:05.327 00:20:05.327 --- 10.0.0.3 ping statistics --- 00:20:05.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.327 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:05.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:20:05.327 00:20:05.327 --- 10.0.0.1 ping statistics --- 00:20:05.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.327 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.327 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85622 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85622 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85622 ']' 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
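The three successful pings above confirm the veth/namespace plumbing that nvmf_veth_init just built. A condensed sketch of that topology, reduced to the commands already shown in this log (run as root; interface, namespace, and address values are the ones used above):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port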
00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:05.328 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.585 [2024-05-15 13:38:18.464407] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:05.585 [2024-05-15 13:38:18.464524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.585 [2024-05-15 13:38:18.606966] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:05.585 [2024-05-15 13:38:18.627482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.585 [2024-05-15 13:38:18.682181] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.585 [2024-05-15 13:38:18.682260] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.585 [2024-05-15 13:38:18.682276] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.585 [2024-05-15 13:38:18.682290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.585 [2024-05-15 13:38:18.682301] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.585 [2024-05-15 13:38:18.682336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:05.843 13:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:06.101 true 00:20:06.101 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.101 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:06.359 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:06.359 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:06.359 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:06.617 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:06.617 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:06.875 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:06.875 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:06.875 13:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 
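The block above is tls.sh's sanity check on the ssl socket implementation: set an option over JSON-RPC, read it back, and compare with jq. A stand-alone version of the same round trip, using only the RPCs that appear in this log (rpc.py path as above; jq assumed to be available):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" sock_set_default_impl -i ssl
"$rpc" sock_impl_set_options -i ssl --tls-version 13
version=$("$rpc" sock_impl_get_options -i ssl | jq -r .tls_version)
[[ "$version" == 13 ]] || echo "unexpected tls_version: $version" >&2
# the log repeats this round trip for --tls-version 7 and for --enable-ktls / --disable-ktls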
00:20:07.132 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.132 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:07.390 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:07.390 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:07.390 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.390 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:07.647 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:07.647 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:07.647 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:07.906 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:07.906 13:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:08.164 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:08.164 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:08.164 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:08.421 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:08.421 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- 
target/tls.sh@121 -- # mktemp 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.C9S4ENbAmd 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.i76rW5MDQJ 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.C9S4ENbAmd 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.i76rW5MDQJ 00:20:08.680 13:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:08.938 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:09.196 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.C9S4ENbAmd 00:20:09.196 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.C9S4ENbAmd 00:20:09.196 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.455 [2024-05-15 13:38:22.455926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.455 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.713 13:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.971 [2024-05-15 13:38:23.048041] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:09.971 [2024-05-15 13:38:23.048155] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.971 [2024-05-15 13:38:23.048360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.971 13:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.536 malloc0 00:20:10.536 13:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.536 13:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C9S4ENbAmd 00:20:10.793 [2024-05-15 13:38:23.881343] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:11.051 13:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.C9S4ENbAmd 00:20:21.061 Initializing NVMe Controllers 00:20:21.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:20:21.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:21.061 Initialization complete. Launching workers. 00:20:21.061 ======================================================== 00:20:21.061 Latency(us) 00:20:21.061 Device Information : IOPS MiB/s Average min max 00:20:21.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12343.37 48.22 5185.74 1384.57 12333.28 00:20:21.061 ======================================================== 00:20:21.061 Total : 12343.37 48.22 5185.74 1384.57 12333.28 00:20:21.061 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9S4ENbAmd 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C9S4ENbAmd' 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85844 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85844 /var/tmp/bdevperf.sock 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85844 ']' 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.061 13:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.319 [2024-05-15 13:38:34.163198] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:21.319 [2024-05-15 13:38:34.163329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85844 ] 00:20:21.319 [2024-05-15 13:38:34.293865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
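The NVMeTLSkey-1:01:… strings produced earlier by format_interchange_psk, and written above to the two temp key files, are TLS pre-shared keys in the NVMe interchange format. A sketch of what the helper appears to compute, judging from the python one-liner it calls and from the 48-character base64 payloads above; the real helper lives in nvmf/common.sh, and the exact byte layout (the ASCII key followed by its little-endian CRC32) is an assumption here, not something this log confirms:

key=00112233445566778899aabbccddeeff   # the first key configured above
python3 - "$key" <<'EOF'
import base64, struct, sys, zlib

key = sys.argv[1].encode()                             # key used as its ASCII hex string (assumed)
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)  # 4-byte CRC32, little-endian (assumed)
print("NVMeTLSkey-1:01:{}:".format(base64.b64encode(key + crc).decode()))
EOF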
00:20:21.319 [2024-05-15 13:38:34.308992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.319 [2024-05-15 13:38:34.365559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.252 13:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:22.252 13:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:22.252 13:38:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C9S4ENbAmd 00:20:22.252 [2024-05-15 13:38:35.243954] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.252 [2024-05-15 13:38:35.244220] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.252 TLSTESTn1 00:20:22.252 13:38:35 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:22.510 Running I/O for 10 seconds... 00:20:32.472 00:20:32.472 Latency(us) 00:20:32.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.472 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.472 Verification LBA range: start 0x0 length 0x2000 00:20:32.472 TLSTESTn1 : 10.01 5025.99 19.63 0.00 0.00 25423.39 5118.05 24591.60 00:20:32.472 =================================================================================================================== 00:20:32.472 Total : 5025.99 19.63 0.00 0.00 25423.39 5118.05 24591.60 00:20:32.472 0 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 85844 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85844 ']' 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85844 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85844 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85844' 00:20:32.472 killing process with pid 85844 00:20:32.472 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85844 00:20:32.472 Received shutdown signal, test time was about 10.000000 seconds 00:20:32.472 00:20:32.472 Latency(us) 00:20:32.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.472 =================================================================================================================== 00:20:32.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.472 [2024-05-15 13:38:45.487656] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85844 
00:20:32.472 scheduled for removal in v24.09 hit 1 times 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i76rW5MDQJ 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i76rW5MDQJ 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.i76rW5MDQJ 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.i76rW5MDQJ' 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85976 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85976 /var/tmp/bdevperf.sock 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85976 ']' 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:32.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:32.731 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.731 [2024-05-15 13:38:45.718639] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:32.731 [2024-05-15 13:38:45.719217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85976 ] 00:20:32.989 [2024-05-15 13:38:45.840047] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
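The bdevperf instance launched just above is the first of several negative tests: run_bdevperf is handed a key or host/subsystem pairing that must not work, and the call is wrapped in NOT so the step only passes if the attach fails. A minimal sketch of that inversion pattern, shown purely to make the later 'return 1' / 'es=1' trace lines easier to read; the real helper in autotest_common.sh is more elaborate than this:

NOT() {
    if "$@"; then
        return 1    # the wrapped command unexpectedly succeeded
    fi
    return 0        # the expected failure happened, so the test step passes
}
NOT false && echo "expected failure observed"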
00:20:32.990 [2024-05-15 13:38:45.856535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.990 [2024-05-15 13:38:45.906021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.990 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:32.990 13:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:32.990 13:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.i76rW5MDQJ 00:20:33.248 [2024-05-15 13:38:46.233845] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.248 [2024-05-15 13:38:46.234234] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:33.248 [2024-05-15 13:38:46.244765] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:33.248 [2024-05-15 13:38:46.244999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665b50 (107): Transport endpoint is not connected 00:20:33.248 [2024-05-15 13:38:46.245959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665b50 (9): Bad file descriptor 00:20:33.248 [2024-05-15 13:38:46.246954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:33.248 [2024-05-15 13:38:46.247109] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:33.248 [2024-05-15 13:38:46.247251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:33.248 request: 00:20:33.248 { 00:20:33.248 "name": "TLSTEST", 00:20:33.248 "trtype": "tcp", 00:20:33.248 "traddr": "10.0.0.2", 00:20:33.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.248 "adrfam": "ipv4", 00:20:33.248 "trsvcid": "4420", 00:20:33.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.248 "psk": "/tmp/tmp.i76rW5MDQJ", 00:20:33.248 "method": "bdev_nvme_attach_controller", 00:20:33.248 "req_id": 1 00:20:33.248 } 00:20:33.248 Got JSON-RPC error response 00:20:33.248 response: 00:20:33.249 { 00:20:33.249 "code": -32602, 00:20:33.249 "message": "Invalid parameters" 00:20:33.249 } 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85976 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85976 ']' 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85976 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85976 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85976' 00:20:33.249 killing process with pid 85976 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85976 00:20:33.249 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.249 00:20:33.249 Latency(us) 00:20:33.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.249 =================================================================================================================== 00:20:33.249 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:33.249 [2024-05-15 13:38:46.295499] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.249 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85976 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C9S4ENbAmd 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C9S4ENbAmd 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C9S4ENbAmd 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C9S4ENbAmd' 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=85992 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 85992 /var/tmp/bdevperf.sock 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 85992 ']' 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:33.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:33.507 13:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.507 [2024-05-15 13:38:46.526109] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:33.507 [2024-05-15 13:38:46.526202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85992 ] 00:20:33.765 [2024-05-15 13:38:46.649828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
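Both of the remaining negative cases fail for the same reason: the target only registered a PSK for the host1/cnode1 pairing, so a TLS client presenting any other identity has nothing to match against and the connection is dropped. A condensed restatement of the server-side registration from earlier in this log, with a note on the identity string the errors below refer to (key path is the temp file from this run):

# one PSK, registered for exactly one (subsystem, host) pair
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C9S4ENbAmd
# as the error strings below show, the looked-up identity embeds "<hostnqn> <subnqn>",
# so host2 -> cnode1 and host1 -> cnode2 both hit
# "Could not find PSK for identity: NVMe0R01 ..." on the target side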
00:20:33.765 [2024-05-15 13:38:46.669530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.765 [2024-05-15 13:38:46.749751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.C9S4ENbAmd 00:20:34.700 [2024-05-15 13:38:47.708632] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.700 [2024-05-15 13:38:47.708773] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:34.700 [2024-05-15 13:38:47.714841] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.700 [2024-05-15 13:38:47.714882] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:34.700 [2024-05-15 13:38:47.714934] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.700 [2024-05-15 13:38:47.715228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59ab50 (107): Transport endpoint is not connected 00:20:34.700 [2024-05-15 13:38:47.716216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59ab50 (9): Bad file descriptor 00:20:34.700 [2024-05-15 13:38:47.717215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:34.700 [2024-05-15 13:38:47.717244] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:34.700 [2024-05-15 13:38:47.717258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:34.700 request: 00:20:34.700 { 00:20:34.700 "name": "TLSTEST", 00:20:34.700 "trtype": "tcp", 00:20:34.700 "traddr": "10.0.0.2", 00:20:34.700 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.700 "adrfam": "ipv4", 00:20:34.700 "trsvcid": "4420", 00:20:34.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.700 "psk": "/tmp/tmp.C9S4ENbAmd", 00:20:34.700 "method": "bdev_nvme_attach_controller", 00:20:34.700 "req_id": 1 00:20:34.700 } 00:20:34.700 Got JSON-RPC error response 00:20:34.700 response: 00:20:34.700 { 00:20:34.700 "code": -32602, 00:20:34.700 "message": "Invalid parameters" 00:20:34.700 } 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 85992 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85992 ']' 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85992 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85992 00:20:34.700 killing process with pid 85992 00:20:34.700 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.700 00:20:34.700 Latency(us) 00:20:34.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.700 =================================================================================================================== 00:20:34.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85992' 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85992 00:20:34.700 [2024-05-15 13:38:47.771008] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:34.700 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85992 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9S4ENbAmd 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9S4ENbAmd 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9S4ENbAmd 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.C9S4ENbAmd' 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86024 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86024 /var/tmp/bdevperf.sock 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86024 ']' 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:34.958 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.959 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:34.959 13:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.959 [2024-05-15 13:38:48.016830] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:34.959 [2024-05-15 13:38:48.016930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86024 ] 00:20:35.242 [2024-05-15 13:38:48.144845] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:35.242 [2024-05-15 13:38:48.163465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.242 [2024-05-15 13:38:48.209478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.242 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:35.242 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:35.242 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C9S4ENbAmd 00:20:35.500 [2024-05-15 13:38:48.585194] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.500 [2024-05-15 13:38:48.585342] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:35.500 [2024-05-15 13:38:48.594982] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.500 [2024-05-15 13:38:48.595028] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.500 [2024-05-15 13:38:48.595081] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.500 [2024-05-15 13:38:48.595745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237ab50 (107): Transport endpoint is not connected 00:20:35.500 [2024-05-15 13:38:48.596732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237ab50 (9): Bad file descriptor 00:20:35.500 [2024-05-15 13:38:48.597730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:35.500 [2024-05-15 13:38:48.597756] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.500 [2024-05-15 13:38:48.597770] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:35.758 request: 00:20:35.758 { 00:20:35.758 "name": "TLSTEST", 00:20:35.758 "trtype": "tcp", 00:20:35.758 "traddr": "10.0.0.2", 00:20:35.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.758 "adrfam": "ipv4", 00:20:35.758 "trsvcid": "4420", 00:20:35.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.758 "psk": "/tmp/tmp.C9S4ENbAmd", 00:20:35.758 "method": "bdev_nvme_attach_controller", 00:20:35.758 "req_id": 1 00:20:35.758 } 00:20:35.758 Got JSON-RPC error response 00:20:35.758 response: 00:20:35.758 { 00:20:35.758 "code": -32602, 00:20:35.758 "message": "Invalid parameters" 00:20:35.758 } 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 86024 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86024 ']' 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86024 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86024 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86024' 00:20:35.758 killing process with pid 86024 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86024 00:20:35.758 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.758 00:20:35.758 Latency(us) 00:20:35.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.758 =================================================================================================================== 00:20:35.758 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.758 [2024-05-15 13:38:48.652087] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86024 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:35.758 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.759 
13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86040 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86040 /var/tmp/bdevperf.sock 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86040 ']' 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.759 13:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.016 [2024-05-15 13:38:48.883980] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:36.016 [2024-05-15 13:38:48.884069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86040 ] 00:20:36.016 [2024-05-15 13:38:49.006840] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
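The NOT run_bdevperf pattern being set up here asserts that the attach is expected to fail; in this case no PSK is supplied at all against a TLS-only listener. A rough Python equivalent of that negative check, shelling out to rpc.py with the same arguments the script uses (paths taken from this log, helper name illustrative), could look like:

import subprocess
from typing import Optional

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def attach_should_fail(sock: str, subnqn: str, hostnqn: str, psk: Optional[str]) -> None:
    """Invoke bdev_nvme_attach_controller through rpc.py and assert a non-zero
    exit, which is what the NOT wrapper around run_bdevperf checks for."""
    cmd = [RPC, "-s", sock, "bdev_nvme_attach_controller",
           "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.2", "-s", "4420",
           "-f", "ipv4", "-n", subnqn, "-q", hostnqn]
    if psk:
        cmd += ["--psk", psk]
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode != 0, "attach unexpectedly succeeded"
    print((result.stdout + result.stderr).strip())  # JSON-RPC error dump, as in the log

# The case exercised at this point: TLS-required listener, no PSK passed.
attach_should_fail("/var/tmp/bdevperf.sock",
                   "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", None)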
00:20:36.016 [2024-05-15 13:38:49.021867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.016 [2024-05-15 13:38:49.078684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.275 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:36.275 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:36.275 13:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:36.534 [2024-05-15 13:38:49.505107] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:36.534 [2024-05-15 13:38:49.506858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb33910 (9): Bad file descriptor 00:20:36.534 [2024-05-15 13:38:49.507853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:36.534 [2024-05-15 13:38:49.507876] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:36.534 [2024-05-15 13:38:49.507891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:36.534 request: 00:20:36.534 { 00:20:36.534 "name": "TLSTEST", 00:20:36.534 "trtype": "tcp", 00:20:36.534 "traddr": "10.0.0.2", 00:20:36.534 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.534 "adrfam": "ipv4", 00:20:36.534 "trsvcid": "4420", 00:20:36.534 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.534 "method": "bdev_nvme_attach_controller", 00:20:36.534 "req_id": 1 00:20:36.534 } 00:20:36.534 Got JSON-RPC error response 00:20:36.534 response: 00:20:36.534 { 00:20:36.534 "code": -32602, 00:20:36.534 "message": "Invalid parameters" 00:20:36.534 } 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 86040 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86040 ']' 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86040 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86040 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:36.534 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:36.535 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86040' 00:20:36.535 killing process with pid 86040 00:20:36.535 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86040 00:20:36.535 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.535 00:20:36.535 Latency(us) 00:20:36.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.535 =================================================================================================================== 00:20:36.535 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.535 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86040 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- 
target/tls.sh@37 -- # return 1 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 85622 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 85622 ']' 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 85622 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85622 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85622' 00:20:36.792 killing process with pid 85622 00:20:36.792 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 85622 00:20:36.792 [2024-05-15 13:38:49.772034] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 85622 00:20:36.792 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:36.792 [2024-05-15 13:38:49.772334] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:37.050 13:38:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.tde0PGHzXH 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.tde0PGHzXH 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86070 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86070 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86070 ']' 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.050 13:38:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.050 [2024-05-15 13:38:50.105703] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:37.050 [2024-05-15 13:38:50.106530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.308 [2024-05-15 13:38:50.238647] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:37.308 [2024-05-15 13:38:50.258670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.308 [2024-05-15 13:38:50.322550] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.308 [2024-05-15 13:38:50.322827] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.308 [2024-05-15 13:38:50.322981] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.308 [2024-05-15 13:38:50.323118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.308 [2024-05-15 13:38:50.323161] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
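A note on the key generated just above: format_interchange_psk turns the raw hex secret into the NVMeTLSkey-1 interchange string that is written to /tmp/tmp.tde0PGHzXH and chmod'ed to 0600. Assuming the interchange format is base64 over the ASCII secret followed by its little-endian CRC32 (an assumption about the helper's internals, not something this log shows), the string could be reproduced with the sketch below.

import base64
import zlib

def format_interchange_psk(secret: str, hash_id: int) -> str:
    """Compose a TLS PSK interchange string of the shape the test writes out:
    NVMeTLSkey-1:<hh>:<base64(secret || CRC32(secret))>:
    CRC byte order and placement are assumed, not taken from this log."""
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(
        hash_id, base64.b64encode(data + crc).decode("ascii"))

# Same inputs the script passes above: a 48-character hex secret, digest id 2.
print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))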
00:20:37.308 [2024-05-15 13:38:50.323292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tde0PGHzXH 00:20:38.284 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.542 [2024-05-15 13:38:51.400080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.542 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.800 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.058 [2024-05-15 13:38:51.904140] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:39.058 [2024-05-15 13:38:51.904455] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.058 [2024-05-15 13:38:51.904779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.058 13:38:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.316 malloc0 00:20:39.316 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:39.574 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:39.832 [2024-05-15 13:38:52.721796] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tde0PGHzXH 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tde0PGHzXH' 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86124 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM 
EXIT 00:20:39.832 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86124 /var/tmp/bdevperf.sock 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86124 ']' 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:39.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:39.833 13:38:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.833 [2024-05-15 13:38:52.793753] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:39.833 [2024-05-15 13:38:52.794112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86124 ] 00:20:39.833 [2024-05-15 13:38:52.921918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:40.091 [2024-05-15 13:38:52.932492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.091 [2024-05-15 13:38:52.984533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.655 13:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:40.655 13:38:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:40.655 13:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:40.912 [2024-05-15 13:38:53.880213] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:40.912 [2024-05-15 13:38:53.881078] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:40.912 TLSTESTn1 00:20:40.912 13:38:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.171 Running I/O for 10 seconds... 
00:20:51.143 00:20:51.143 Latency(us) 00:20:51.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.143 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:51.143 Verification LBA range: start 0x0 length 0x2000 00:20:51.143 TLSTESTn1 : 10.01 5281.00 20.63 0.00 0.00 24195.42 5336.50 20971.52 00:20:51.143 =================================================================================================================== 00:20:51.143 Total : 5281.00 20.63 0.00 0.00 24195.42 5336.50 20971.52 00:20:51.143 0 00:20:51.143 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.143 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 86124 00:20:51.143 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86124 ']' 00:20:51.143 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86124 00:20:51.143 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86124 00:20:51.144 killing process with pid 86124 00:20:51.144 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.144 00:20:51.144 Latency(us) 00:20:51.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.144 =================================================================================================================== 00:20:51.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86124' 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86124 00:20:51.144 [2024-05-15 13:39:04.146193] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:51.144 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86124 00:20:51.402 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.tde0PGHzXH 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tde0PGHzXH 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tde0PGHzXH 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tde0PGHzXH 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.403 
13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tde0PGHzXH' 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86259 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86259 /var/tmp/bdevperf.sock 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86259 ']' 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:51.403 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.403 [2024-05-15 13:39:04.401581] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:51.403 [2024-05-15 13:39:04.402152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86259 ] 00:20:51.661 [2024-05-15 13:39:04.539913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
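The step being set up here flips the key file to 0666 and expects the attach to be refused; the "Incorrect permissions for PSK file" errors further down are the initiator and the target each rejecting a key readable by group or other. A small pre-flight check in that spirit (the exact mode mask SPDK enforces is an assumption here) could be:

import os
import stat
import sys

def check_psk_mode(path: str) -> None:
    """Refuse a PSK file that grants any group/other permissions, mirroring
    the 'Incorrect permissions for PSK file' errors seen in this log."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        sys.exit(f"{path}: mode {mode:04o} too permissive, expected 0600")
    print(f"{path}: mode {mode:04o} ok")

check_psk_mode("/tmp/tmp.tde0PGHzXH")  # passes at 0600, fails after chmod 0666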
00:20:51.661 [2024-05-15 13:39:04.560972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.661 [2024-05-15 13:39:04.611706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.661 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:51.661 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:51.661 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:51.919 [2024-05-15 13:39:04.938708] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.919 [2024-05-15 13:39:04.939123] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:51.919 [2024-05-15 13:39:04.939346] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.tde0PGHzXH 00:20:51.919 request: 00:20:51.919 { 00:20:51.919 "name": "TLSTEST", 00:20:51.919 "trtype": "tcp", 00:20:51.919 "traddr": "10.0.0.2", 00:20:51.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.919 "adrfam": "ipv4", 00:20:51.919 "trsvcid": "4420", 00:20:51.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.919 "psk": "/tmp/tmp.tde0PGHzXH", 00:20:51.919 "method": "bdev_nvme_attach_controller", 00:20:51.919 "req_id": 1 00:20:51.919 } 00:20:51.919 Got JSON-RPC error response 00:20:51.919 response: 00:20:51.919 { 00:20:51.919 "code": -1, 00:20:51.919 "message": "Operation not permitted" 00:20:51.919 } 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 86259 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86259 ']' 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86259 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86259 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86259' 00:20:51.919 killing process with pid 86259 00:20:51.919 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.919 00:20:51.919 Latency(us) 00:20:51.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.919 =================================================================================================================== 00:20:51.919 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86259 00:20:51.919 13:39:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86259 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:52.177 
13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 86070 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86070 ']' 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86070 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86070 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86070' 00:20:52.177 killing process with pid 86070 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86070 00:20:52.177 [2024-05-15 13:39:05.199005] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:52.177 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86070 00:20:52.177 [2024-05-15 13:39:05.199254] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86284 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86284 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86284 ']' 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.435 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.435 [2024-05-15 13:39:05.454753] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:52.435 [2024-05-15 13:39:05.455058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.694 [2024-05-15 13:39:05.583193] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:20:52.694 [2024-05-15 13:39:05.597904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.694 [2024-05-15 13:39:05.653700] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.694 [2024-05-15 13:39:05.653937] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.694 [2024-05-15 13:39:05.654089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.694 [2024-05-15 13:39:05.654248] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.694 [2024-05-15 13:39:05.654331] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:52.694 [2024-05-15 13:39:05.654458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.694 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:52.694 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:52.694 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.694 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.694 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tde0PGHzXH 00:20:52.952 13:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.952 [2024-05-15 13:39:06.005165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.952 13:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:53.210 13:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.468 [2024-05-15 13:39:06.437218] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:53.468 [2024-05-15 13:39:06.437577] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.468 [2024-05-15 13:39:06.437878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.468 13:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.727 malloc0 00:20:53.727 13:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.985 13:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:54.243 [2024-05-15 13:39:07.326919] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:54.243 [2024-05-15 13:39:07.327139] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:54.243 [2024-05-15 13:39:07.327274] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:54.243 request: 00:20:54.243 { 00:20:54.243 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:54.243 "host": "nqn.2016-06.io.spdk:host1", 00:20:54.243 "psk": "/tmp/tmp.tde0PGHzXH", 00:20:54.243 "method": "nvmf_subsystem_add_host", 00:20:54.243 "req_id": 1 00:20:54.243 } 00:20:54.243 Got JSON-RPC error response 00:20:54.243 response: 00:20:54.243 { 00:20:54.243 "code": -32603, 00:20:54.243 "message": "Internal error" 00:20:54.243 } 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 86284 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86284 ']' 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86284 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86284 00:20:54.502 killing process with pid 86284 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86284' 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86284 00:20:54.502 [2024-05-15 13:39:07.382000] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86284 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.tde0PGHzXH 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # 
set +x 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86339 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86339 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86339 ']' 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.502 13:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.761 [2024-05-15 13:39:07.650770] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:54.761 [2024-05-15 13:39:07.651210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.761 [2024-05-15 13:39:07.783656] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:54.761 [2024-05-15 13:39:07.802815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.761 [2024-05-15 13:39:07.853527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.761 [2024-05-15 13:39:07.853770] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.761 [2024-05-15 13:39:07.853908] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.761 [2024-05-15 13:39:07.853979] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.761 [2024-05-15 13:39:07.854157] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
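With the key back at 0600, setup_nvmf_tgt is about to repeat the target-side sequence traced earlier: TCP transport, subsystem, TLS-enabled listener (-k), malloc0 namespace, and the host entry carrying the PSK. Condensed into Python, reusing the same rpc.py invocations visible in the trace (target RPC socket left at its default), a sketch would be:

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
HOST = "nqn.2016-06.io.spdk:host1"
KEY = "/tmp/tmp.tde0PGHzXH"

def rpc(*args: str) -> None:
    """Run one rpc.py command against the default target socket, failing fast."""
    subprocess.run([RPC, *args], check=True)

# Same order as the setup_nvmf_tgt steps traced in this log.
rpc("nvmf_create_transport", "-t", "tcp", "-o")
rpc("nvmf_create_subsystem", NQN, "-s", "SPDK00000000000001", "-m", "10")
rpc("nvmf_subsystem_add_listener", NQN, "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")
rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
rpc("nvmf_subsystem_add_ns", NQN, "malloc0", "-n", "1")
rpc("nvmf_subsystem_add_host", NQN, HOST, "--psk", KEY)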
00:20:54.761 [2024-05-15 13:39:07.854256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tde0PGHzXH 00:20:55.696 13:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:55.954 [2024-05-15 13:39:08.972413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.954 13:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:56.212 13:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:56.469 [2024-05-15 13:39:09.488510] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:56.469 [2024-05-15 13:39:09.488965] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:56.469 [2024-05-15 13:39:09.489402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.469 13:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:56.727 malloc0 00:20:56.984 13:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.243 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:57.502 [2024-05-15 13:39:10.347833] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=86394 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 86394 /var/tmp/bdevperf.sock 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86394 ']' 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:57.502 13:39:10 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:57.502 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.502 [2024-05-15 13:39:10.427103] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:57.502 [2024-05-15 13:39:10.427486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86394 ] 00:20:57.502 [2024-05-15 13:39:10.562037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:57.502 [2024-05-15 13:39:10.580887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.761 [2024-05-15 13:39:10.640867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.761 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:57.761 13:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:57.761 13:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:20:58.019 [2024-05-15 13:39:10.975491] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:58.019 [2024-05-15 13:39:10.975990] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:58.019 TLSTESTn1 00:20:58.019 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:20:58.586 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:58.586 "subsystems": [ 00:20:58.586 { 00:20:58.586 "subsystem": "keyring", 00:20:58.586 "config": [] 00:20:58.586 }, 00:20:58.586 { 00:20:58.586 "subsystem": "iobuf", 00:20:58.586 "config": [ 00:20:58.586 { 00:20:58.586 "method": "iobuf_set_options", 00:20:58.586 "params": { 00:20:58.586 "small_pool_count": 8192, 00:20:58.586 "large_pool_count": 1024, 00:20:58.586 "small_bufsize": 8192, 00:20:58.586 "large_bufsize": 135168 00:20:58.586 } 00:20:58.586 } 00:20:58.586 ] 00:20:58.586 }, 00:20:58.586 { 00:20:58.586 "subsystem": "sock", 00:20:58.586 "config": [ 00:20:58.586 { 00:20:58.586 "method": "sock_impl_set_options", 00:20:58.586 "params": { 00:20:58.586 "impl_name": "uring", 00:20:58.586 "recv_buf_size": 2097152, 00:20:58.586 "send_buf_size": 2097152, 00:20:58.586 "enable_recv_pipe": true, 00:20:58.586 "enable_quickack": false, 00:20:58.586 "enable_placement_id": 0, 00:20:58.586 "enable_zerocopy_send_server": false, 00:20:58.586 "enable_zerocopy_send_client": false, 00:20:58.586 "zerocopy_threshold": 0, 00:20:58.586 "tls_version": 0, 00:20:58.586 "enable_ktls": false 00:20:58.586 } 00:20:58.586 }, 00:20:58.586 { 00:20:58.586 "method": "sock_impl_set_options", 00:20:58.586 "params": { 00:20:58.586 "impl_name": "posix", 00:20:58.586 "recv_buf_size": 
2097152, 00:20:58.586 "send_buf_size": 2097152, 00:20:58.586 "enable_recv_pipe": true, 00:20:58.586 "enable_quickack": false, 00:20:58.586 "enable_placement_id": 0, 00:20:58.586 "enable_zerocopy_send_server": true, 00:20:58.586 "enable_zerocopy_send_client": false, 00:20:58.586 "zerocopy_threshold": 0, 00:20:58.587 "tls_version": 0, 00:20:58.587 "enable_ktls": false 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "sock_impl_set_options", 00:20:58.587 "params": { 00:20:58.587 "impl_name": "ssl", 00:20:58.587 "recv_buf_size": 4096, 00:20:58.587 "send_buf_size": 4096, 00:20:58.587 "enable_recv_pipe": true, 00:20:58.587 "enable_quickack": false, 00:20:58.587 "enable_placement_id": 0, 00:20:58.587 "enable_zerocopy_send_server": true, 00:20:58.587 "enable_zerocopy_send_client": false, 00:20:58.587 "zerocopy_threshold": 0, 00:20:58.587 "tls_version": 0, 00:20:58.587 "enable_ktls": false 00:20:58.587 } 00:20:58.587 } 00:20:58.587 ] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "vmd", 00:20:58.587 "config": [] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "accel", 00:20:58.587 "config": [ 00:20:58.587 { 00:20:58.587 "method": "accel_set_options", 00:20:58.587 "params": { 00:20:58.587 "small_cache_size": 128, 00:20:58.587 "large_cache_size": 16, 00:20:58.587 "task_count": 2048, 00:20:58.587 "sequence_count": 2048, 00:20:58.587 "buf_count": 2048 00:20:58.587 } 00:20:58.587 } 00:20:58.587 ] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "bdev", 00:20:58.587 "config": [ 00:20:58.587 { 00:20:58.587 "method": "bdev_set_options", 00:20:58.587 "params": { 00:20:58.587 "bdev_io_pool_size": 65535, 00:20:58.587 "bdev_io_cache_size": 256, 00:20:58.587 "bdev_auto_examine": true, 00:20:58.587 "iobuf_small_cache_size": 128, 00:20:58.587 "iobuf_large_cache_size": 16 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_raid_set_options", 00:20:58.587 "params": { 00:20:58.587 "process_window_size_kb": 1024 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_iscsi_set_options", 00:20:58.587 "params": { 00:20:58.587 "timeout_sec": 30 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_nvme_set_options", 00:20:58.587 "params": { 00:20:58.587 "action_on_timeout": "none", 00:20:58.587 "timeout_us": 0, 00:20:58.587 "timeout_admin_us": 0, 00:20:58.587 "keep_alive_timeout_ms": 10000, 00:20:58.587 "arbitration_burst": 0, 00:20:58.587 "low_priority_weight": 0, 00:20:58.587 "medium_priority_weight": 0, 00:20:58.587 "high_priority_weight": 0, 00:20:58.587 "nvme_adminq_poll_period_us": 10000, 00:20:58.587 "nvme_ioq_poll_period_us": 0, 00:20:58.587 "io_queue_requests": 0, 00:20:58.587 "delay_cmd_submit": true, 00:20:58.587 "transport_retry_count": 4, 00:20:58.587 "bdev_retry_count": 3, 00:20:58.587 "transport_ack_timeout": 0, 00:20:58.587 "ctrlr_loss_timeout_sec": 0, 00:20:58.587 "reconnect_delay_sec": 0, 00:20:58.587 "fast_io_fail_timeout_sec": 0, 00:20:58.587 "disable_auto_failback": false, 00:20:58.587 "generate_uuids": false, 00:20:58.587 "transport_tos": 0, 00:20:58.587 "nvme_error_stat": false, 00:20:58.587 "rdma_srq_size": 0, 00:20:58.587 "io_path_stat": false, 00:20:58.587 "allow_accel_sequence": false, 00:20:58.587 "rdma_max_cq_size": 0, 00:20:58.587 "rdma_cm_event_timeout_ms": 0, 00:20:58.587 "dhchap_digests": [ 00:20:58.587 "sha256", 00:20:58.587 "sha384", 00:20:58.587 "sha512" 00:20:58.587 ], 00:20:58.587 "dhchap_dhgroups": [ 00:20:58.587 "null", 00:20:58.587 "ffdhe2048", 00:20:58.587 "ffdhe3072", 
00:20:58.587 "ffdhe4096", 00:20:58.587 "ffdhe6144", 00:20:58.587 "ffdhe8192" 00:20:58.587 ] 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_nvme_set_hotplug", 00:20:58.587 "params": { 00:20:58.587 "period_us": 100000, 00:20:58.587 "enable": false 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_malloc_create", 00:20:58.587 "params": { 00:20:58.587 "name": "malloc0", 00:20:58.587 "num_blocks": 8192, 00:20:58.587 "block_size": 4096, 00:20:58.587 "physical_block_size": 4096, 00:20:58.587 "uuid": "0ad39c5c-b483-4742-b22e-d0d78fc10d06", 00:20:58.587 "optimal_io_boundary": 0 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "bdev_wait_for_examine" 00:20:58.587 } 00:20:58.587 ] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "nbd", 00:20:58.587 "config": [] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "scheduler", 00:20:58.587 "config": [ 00:20:58.587 { 00:20:58.587 "method": "framework_set_scheduler", 00:20:58.587 "params": { 00:20:58.587 "name": "static" 00:20:58.587 } 00:20:58.587 } 00:20:58.587 ] 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "subsystem": "nvmf", 00:20:58.587 "config": [ 00:20:58.587 { 00:20:58.587 "method": "nvmf_set_config", 00:20:58.587 "params": { 00:20:58.587 "discovery_filter": "match_any", 00:20:58.587 "admin_cmd_passthru": { 00:20:58.587 "identify_ctrlr": false 00:20:58.587 } 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_set_max_subsystems", 00:20:58.587 "params": { 00:20:58.587 "max_subsystems": 1024 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_set_crdt", 00:20:58.587 "params": { 00:20:58.587 "crdt1": 0, 00:20:58.587 "crdt2": 0, 00:20:58.587 "crdt3": 0 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_create_transport", 00:20:58.587 "params": { 00:20:58.587 "trtype": "TCP", 00:20:58.587 "max_queue_depth": 128, 00:20:58.587 "max_io_qpairs_per_ctrlr": 127, 00:20:58.587 "in_capsule_data_size": 4096, 00:20:58.587 "max_io_size": 131072, 00:20:58.587 "io_unit_size": 131072, 00:20:58.587 "max_aq_depth": 128, 00:20:58.587 "num_shared_buffers": 511, 00:20:58.587 "buf_cache_size": 4294967295, 00:20:58.587 "dif_insert_or_strip": false, 00:20:58.587 "zcopy": false, 00:20:58.587 "c2h_success": false, 00:20:58.587 "sock_priority": 0, 00:20:58.587 "abort_timeout_sec": 1, 00:20:58.587 "ack_timeout": 0, 00:20:58.587 "data_wr_pool_size": 0 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_create_subsystem", 00:20:58.587 "params": { 00:20:58.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.587 "allow_any_host": false, 00:20:58.587 "serial_number": "SPDK00000000000001", 00:20:58.587 "model_number": "SPDK bdev Controller", 00:20:58.587 "max_namespaces": 10, 00:20:58.587 "min_cntlid": 1, 00:20:58.587 "max_cntlid": 65519, 00:20:58.587 "ana_reporting": false 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_subsystem_add_host", 00:20:58.587 "params": { 00:20:58.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.587 "host": "nqn.2016-06.io.spdk:host1", 00:20:58.587 "psk": "/tmp/tmp.tde0PGHzXH" 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_subsystem_add_ns", 00:20:58.587 "params": { 00:20:58.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.587 "namespace": { 00:20:58.587 "nsid": 1, 00:20:58.587 "bdev_name": "malloc0", 00:20:58.587 "nguid": "0AD39C5CB4834742B22ED0D78FC10D06", 00:20:58.587 "uuid": "0ad39c5c-b483-4742-b22e-d0d78fc10d06", 
00:20:58.587 "no_auto_visible": false 00:20:58.587 } 00:20:58.587 } 00:20:58.587 }, 00:20:58.587 { 00:20:58.587 "method": "nvmf_subsystem_add_listener", 00:20:58.587 "params": { 00:20:58.587 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.588 "listen_address": { 00:20:58.588 "trtype": "TCP", 00:20:58.588 "adrfam": "IPv4", 00:20:58.588 "traddr": "10.0.0.2", 00:20:58.588 "trsvcid": "4420" 00:20:58.588 }, 00:20:58.588 "secure_channel": true 00:20:58.588 } 00:20:58.588 } 00:20:58.588 ] 00:20:58.588 } 00:20:58.588 ] 00:20:58.588 }' 00:20:58.588 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:58.846 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:58.846 "subsystems": [ 00:20:58.846 { 00:20:58.846 "subsystem": "keyring", 00:20:58.846 "config": [] 00:20:58.846 }, 00:20:58.846 { 00:20:58.846 "subsystem": "iobuf", 00:20:58.846 "config": [ 00:20:58.846 { 00:20:58.846 "method": "iobuf_set_options", 00:20:58.846 "params": { 00:20:58.846 "small_pool_count": 8192, 00:20:58.846 "large_pool_count": 1024, 00:20:58.846 "small_bufsize": 8192, 00:20:58.846 "large_bufsize": 135168 00:20:58.846 } 00:20:58.846 } 00:20:58.846 ] 00:20:58.846 }, 00:20:58.846 { 00:20:58.846 "subsystem": "sock", 00:20:58.846 "config": [ 00:20:58.846 { 00:20:58.846 "method": "sock_impl_set_options", 00:20:58.846 "params": { 00:20:58.846 "impl_name": "uring", 00:20:58.846 "recv_buf_size": 2097152, 00:20:58.847 "send_buf_size": 2097152, 00:20:58.847 "enable_recv_pipe": true, 00:20:58.847 "enable_quickack": false, 00:20:58.847 "enable_placement_id": 0, 00:20:58.847 "enable_zerocopy_send_server": false, 00:20:58.847 "enable_zerocopy_send_client": false, 00:20:58.847 "zerocopy_threshold": 0, 00:20:58.847 "tls_version": 0, 00:20:58.847 "enable_ktls": false 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "sock_impl_set_options", 00:20:58.847 "params": { 00:20:58.847 "impl_name": "posix", 00:20:58.847 "recv_buf_size": 2097152, 00:20:58.847 "send_buf_size": 2097152, 00:20:58.847 "enable_recv_pipe": true, 00:20:58.847 "enable_quickack": false, 00:20:58.847 "enable_placement_id": 0, 00:20:58.847 "enable_zerocopy_send_server": true, 00:20:58.847 "enable_zerocopy_send_client": false, 00:20:58.847 "zerocopy_threshold": 0, 00:20:58.847 "tls_version": 0, 00:20:58.847 "enable_ktls": false 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "sock_impl_set_options", 00:20:58.847 "params": { 00:20:58.847 "impl_name": "ssl", 00:20:58.847 "recv_buf_size": 4096, 00:20:58.847 "send_buf_size": 4096, 00:20:58.847 "enable_recv_pipe": true, 00:20:58.847 "enable_quickack": false, 00:20:58.847 "enable_placement_id": 0, 00:20:58.847 "enable_zerocopy_send_server": true, 00:20:58.847 "enable_zerocopy_send_client": false, 00:20:58.847 "zerocopy_threshold": 0, 00:20:58.847 "tls_version": 0, 00:20:58.847 "enable_ktls": false 00:20:58.847 } 00:20:58.847 } 00:20:58.847 ] 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "subsystem": "vmd", 00:20:58.847 "config": [] 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "subsystem": "accel", 00:20:58.847 "config": [ 00:20:58.847 { 00:20:58.847 "method": "accel_set_options", 00:20:58.847 "params": { 00:20:58.847 "small_cache_size": 128, 00:20:58.847 "large_cache_size": 16, 00:20:58.847 "task_count": 2048, 00:20:58.847 "sequence_count": 2048, 00:20:58.847 "buf_count": 2048 00:20:58.847 } 00:20:58.847 } 00:20:58.847 ] 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "subsystem": "bdev", 
00:20:58.847 "config": [ 00:20:58.847 { 00:20:58.847 "method": "bdev_set_options", 00:20:58.847 "params": { 00:20:58.847 "bdev_io_pool_size": 65535, 00:20:58.847 "bdev_io_cache_size": 256, 00:20:58.847 "bdev_auto_examine": true, 00:20:58.847 "iobuf_small_cache_size": 128, 00:20:58.847 "iobuf_large_cache_size": 16 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_raid_set_options", 00:20:58.847 "params": { 00:20:58.847 "process_window_size_kb": 1024 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_iscsi_set_options", 00:20:58.847 "params": { 00:20:58.847 "timeout_sec": 30 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_nvme_set_options", 00:20:58.847 "params": { 00:20:58.847 "action_on_timeout": "none", 00:20:58.847 "timeout_us": 0, 00:20:58.847 "timeout_admin_us": 0, 00:20:58.847 "keep_alive_timeout_ms": 10000, 00:20:58.847 "arbitration_burst": 0, 00:20:58.847 "low_priority_weight": 0, 00:20:58.847 "medium_priority_weight": 0, 00:20:58.847 "high_priority_weight": 0, 00:20:58.847 "nvme_adminq_poll_period_us": 10000, 00:20:58.847 "nvme_ioq_poll_period_us": 0, 00:20:58.847 "io_queue_requests": 512, 00:20:58.847 "delay_cmd_submit": true, 00:20:58.847 "transport_retry_count": 4, 00:20:58.847 "bdev_retry_count": 3, 00:20:58.847 "transport_ack_timeout": 0, 00:20:58.847 "ctrlr_loss_timeout_sec": 0, 00:20:58.847 "reconnect_delay_sec": 0, 00:20:58.847 "fast_io_fail_timeout_sec": 0, 00:20:58.847 "disable_auto_failback": false, 00:20:58.847 "generate_uuids": false, 00:20:58.847 "transport_tos": 0, 00:20:58.847 "nvme_error_stat": false, 00:20:58.847 "rdma_srq_size": 0, 00:20:58.847 "io_path_stat": false, 00:20:58.847 "allow_accel_sequence": false, 00:20:58.847 "rdma_max_cq_size": 0, 00:20:58.847 "rdma_cm_event_timeout_ms": 0, 00:20:58.847 "dhchap_digests": [ 00:20:58.847 "sha256", 00:20:58.847 "sha384", 00:20:58.847 "sha512" 00:20:58.847 ], 00:20:58.847 "dhchap_dhgroups": [ 00:20:58.847 "null", 00:20:58.847 "ffdhe2048", 00:20:58.847 "ffdhe3072", 00:20:58.847 "ffdhe4096", 00:20:58.847 "ffdhe6144", 00:20:58.847 "ffdhe8192" 00:20:58.847 ] 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_nvme_attach_controller", 00:20:58.847 "params": { 00:20:58.847 "name": "TLSTEST", 00:20:58.847 "trtype": "TCP", 00:20:58.847 "adrfam": "IPv4", 00:20:58.847 "traddr": "10.0.0.2", 00:20:58.847 "trsvcid": "4420", 00:20:58.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.847 "prchk_reftag": false, 00:20:58.847 "prchk_guard": false, 00:20:58.847 "ctrlr_loss_timeout_sec": 0, 00:20:58.847 "reconnect_delay_sec": 0, 00:20:58.847 "fast_io_fail_timeout_sec": 0, 00:20:58.847 "psk": "/tmp/tmp.tde0PGHzXH", 00:20:58.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.847 "hdgst": false, 00:20:58.847 "ddgst": false 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_nvme_set_hotplug", 00:20:58.847 "params": { 00:20:58.847 "period_us": 100000, 00:20:58.847 "enable": false 00:20:58.847 } 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "method": "bdev_wait_for_examine" 00:20:58.847 } 00:20:58.847 ] 00:20:58.847 }, 00:20:58.847 { 00:20:58.847 "subsystem": "nbd", 00:20:58.847 "config": [] 00:20:58.847 } 00:20:58.847 ] 00:20:58.847 }' 00:20:58.847 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 86394 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86394 ']' 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86394 00:20:58.848 
13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86394 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86394' 00:20:58.848 killing process with pid 86394 00:20:58.848 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86394 00:20:58.848 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.848 00:20:58.848 Latency(us) 00:20:58.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.848 =================================================================================================================== 00:20:58.848 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:58.848 [2024-05-15 13:39:11.795927] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86394 00:20:58.848 scheduled for removal in v24.09 hit 1 times 00:20:59.106 13:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 86339 00:20:59.106 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86339 ']' 00:20:59.106 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86339 00:20:59.106 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:59.107 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:59.107 13:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86339 00:20:59.107 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:59.107 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:59.107 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86339' 00:20:59.107 killing process with pid 86339 00:20:59.107 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86339 00:20:59.107 [2024-05-15 13:39:12.012120] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:59.107 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86339 00:20:59.107 [2024-05-15 13:39:12.012414] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:59.365 13:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:59.365 13:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:59.365 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:59.365 13:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:59.365 "subsystems": [ 00:20:59.365 { 00:20:59.365 "subsystem": "keyring", 00:20:59.365 "config": [] 00:20:59.365 }, 00:20:59.365 { 00:20:59.365 "subsystem": "iobuf", 00:20:59.365 "config": [ 00:20:59.365 { 00:20:59.365 "method": "iobuf_set_options", 00:20:59.365 
"params": { 00:20:59.365 "small_pool_count": 8192, 00:20:59.365 "large_pool_count": 1024, 00:20:59.365 "small_bufsize": 8192, 00:20:59.365 "large_bufsize": 135168 00:20:59.365 } 00:20:59.365 } 00:20:59.365 ] 00:20:59.365 }, 00:20:59.365 { 00:20:59.365 "subsystem": "sock", 00:20:59.365 "config": [ 00:20:59.365 { 00:20:59.365 "method": "sock_impl_set_options", 00:20:59.365 "params": { 00:20:59.365 "impl_name": "uring", 00:20:59.365 "recv_buf_size": 2097152, 00:20:59.365 "send_buf_size": 2097152, 00:20:59.365 "enable_recv_pipe": true, 00:20:59.365 "enable_quickack": false, 00:20:59.365 "enable_placement_id": 0, 00:20:59.365 "enable_zerocopy_send_server": false, 00:20:59.365 "enable_zerocopy_send_client": false, 00:20:59.365 "zerocopy_threshold": 0, 00:20:59.365 "tls_version": 0, 00:20:59.366 "enable_ktls": false 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "sock_impl_set_options", 00:20:59.366 "params": { 00:20:59.366 "impl_name": "posix", 00:20:59.366 "recv_buf_size": 2097152, 00:20:59.366 "send_buf_size": 2097152, 00:20:59.366 "enable_recv_pipe": true, 00:20:59.366 "enable_quickack": false, 00:20:59.366 "enable_placement_id": 0, 00:20:59.366 "enable_zerocopy_send_server": true, 00:20:59.366 "enable_zerocopy_send_client": false, 00:20:59.366 "zerocopy_threshold": 0, 00:20:59.366 "tls_version": 0, 00:20:59.366 "enable_ktls": false 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "sock_impl_set_options", 00:20:59.366 "params": { 00:20:59.366 "impl_name": "ssl", 00:20:59.366 "recv_buf_size": 4096, 00:20:59.366 "send_buf_size": 4096, 00:20:59.366 "enable_recv_pipe": true, 00:20:59.366 "enable_quickack": false, 00:20:59.366 "enable_placement_id": 0, 00:20:59.366 "enable_zerocopy_send_server": true, 00:20:59.366 "enable_zerocopy_send_client": false, 00:20:59.366 "zerocopy_threshold": 0, 00:20:59.366 "tls_version": 0, 00:20:59.366 "enable_ktls": false 00:20:59.366 } 00:20:59.366 } 00:20:59.366 ] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "vmd", 00:20:59.366 "config": [] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "accel", 00:20:59.366 "config": [ 00:20:59.366 { 00:20:59.366 "method": "accel_set_options", 00:20:59.366 "params": { 00:20:59.366 "small_cache_size": 128, 00:20:59.366 "large_cache_size": 16, 00:20:59.366 "task_count": 2048, 00:20:59.366 "sequence_count": 2048, 00:20:59.366 "buf_count": 2048 00:20:59.366 } 00:20:59.366 } 00:20:59.366 ] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "bdev", 00:20:59.366 "config": [ 00:20:59.366 { 00:20:59.366 "method": "bdev_set_options", 00:20:59.366 "params": { 00:20:59.366 "bdev_io_pool_size": 65535, 00:20:59.366 "bdev_io_cache_size": 256, 00:20:59.366 "bdev_auto_examine": true, 00:20:59.366 "iobuf_small_cache_size": 128, 00:20:59.366 "iobuf_large_cache_size": 16 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_raid_set_options", 00:20:59.366 "params": { 00:20:59.366 "process_window_size_kb": 1024 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_iscsi_set_options", 00:20:59.366 "params": { 00:20:59.366 "timeout_sec": 30 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_nvme_set_options", 00:20:59.366 "params": { 00:20:59.366 "action_on_timeout": "none", 00:20:59.366 "timeout_us": 0, 00:20:59.366 "timeout_admin_us": 0, 00:20:59.366 "keep_alive_timeout_ms": 10000, 00:20:59.366 "arbitration_burst": 0, 00:20:59.366 "low_priority_weight": 0, 00:20:59.366 "medium_priority_weight": 0, 00:20:59.366 
"high_priority_weight": 0, 00:20:59.366 "nvme_adminq_poll_period_us": 10000, 00:20:59.366 "nvme_ioq_poll_period_us": 0, 00:20:59.366 "io_queue_requests": 0, 00:20:59.366 "delay_cmd_submit": true, 00:20:59.366 "transport_retry_count": 4, 00:20:59.366 "bdev_retry_count": 3, 00:20:59.366 "transport_ack_timeout": 0, 00:20:59.366 "ctrlr_loss_timeout_sec": 0, 00:20:59.366 "reconnect_delay_sec": 0, 00:20:59.366 "fast_io_fail_timeout_sec": 0, 00:20:59.366 "disable_auto_failback": false, 00:20:59.366 "generate_uuids": false, 00:20:59.366 "transport_tos": 0, 00:20:59.366 "nvme_error_stat": false, 00:20:59.366 "rdma_srq_size": 0, 00:20:59.366 "io_path_stat": false, 00:20:59.366 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.366 "allow_accel_sequence": false, 00:20:59.366 "rdma_max_cq_size": 0, 00:20:59.366 "rdma_cm_event_timeout_ms": 0, 00:20:59.366 "dhchap_digests": [ 00:20:59.366 "sha256", 00:20:59.366 "sha384", 00:20:59.366 "sha512" 00:20:59.366 ], 00:20:59.366 "dhchap_dhgroups": [ 00:20:59.366 "null", 00:20:59.366 "ffdhe2048", 00:20:59.366 "ffdhe3072", 00:20:59.366 "ffdhe4096", 00:20:59.366 "ffdhe6144", 00:20:59.366 "ffdhe8192" 00:20:59.366 ] 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_nvme_set_hotplug", 00:20:59.366 "params": { 00:20:59.366 "period_us": 100000, 00:20:59.366 "enable": false 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_malloc_create", 00:20:59.366 "params": { 00:20:59.366 "name": "malloc0", 00:20:59.366 "num_blocks": 8192, 00:20:59.366 "block_size": 4096, 00:20:59.366 "physical_block_size": 4096, 00:20:59.366 "uuid": "0ad39c5c-b483-4742-b22e-d0d78fc10d06", 00:20:59.366 "optimal_io_boundary": 0 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "bdev_wait_for_examine" 00:20:59.366 } 00:20:59.366 ] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "nbd", 00:20:59.366 "config": [] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "scheduler", 00:20:59.366 "config": [ 00:20:59.366 { 00:20:59.366 "method": "framework_set_scheduler", 00:20:59.366 "params": { 00:20:59.366 "name": "static" 00:20:59.366 } 00:20:59.366 } 00:20:59.366 ] 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "subsystem": "nvmf", 00:20:59.366 "config": [ 00:20:59.366 { 00:20:59.366 "method": "nvmf_set_config", 00:20:59.366 "params": { 00:20:59.366 "discovery_filter": "match_any", 00:20:59.366 "admin_cmd_passthru": { 00:20:59.366 "identify_ctrlr": false 00:20:59.366 } 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_set_max_subsystems", 00:20:59.366 "params": { 00:20:59.366 "max_subsystems": 1024 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_set_crdt", 00:20:59.366 "params": { 00:20:59.366 "crdt1": 0, 00:20:59.366 "crdt2": 0, 00:20:59.366 "crdt3": 0 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_create_transport", 00:20:59.366 "params": { 00:20:59.366 "trtype": "TCP", 00:20:59.366 "max_queue_depth": 128, 00:20:59.366 "max_io_qpairs_per_ctrlr": 127, 00:20:59.366 "in_capsule_data_size": 4096, 00:20:59.366 "max_io_size": 131072, 00:20:59.366 "io_unit_size": 131072, 00:20:59.366 "max_aq_depth": 128, 00:20:59.366 "num_shared_buffers": 511, 00:20:59.366 "buf_cache_size": 4294967295, 00:20:59.366 "dif_insert_or_strip": false, 00:20:59.366 "zcopy": false, 00:20:59.366 "c2h_success": false, 00:20:59.366 "sock_priority": 0, 00:20:59.366 "abort_timeout_sec": 1, 00:20:59.366 "ack_timeout": 0, 00:20:59.366 "data_wr_pool_size": 0 
00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_create_subsystem", 00:20:59.366 "params": { 00:20:59.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.366 "allow_any_host": false, 00:20:59.366 "serial_number": "SPDK00000000000001", 00:20:59.366 "model_number": "SPDK bdev Controller", 00:20:59.366 "max_namespaces": 10, 00:20:59.366 "min_cntlid": 1, 00:20:59.366 "max_cntlid": 65519, 00:20:59.366 "ana_reporting": false 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_subsystem_add_host", 00:20:59.366 "params": { 00:20:59.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.366 "host": "nqn.2016-06.io.spdk:host1", 00:20:59.366 "psk": "/tmp/tmp.tde0PGHzXH" 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_subsystem_add_ns", 00:20:59.366 "params": { 00:20:59.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.366 "namespace": { 00:20:59.366 "nsid": 1, 00:20:59.366 "bdev_name": "malloc0", 00:20:59.366 "nguid": "0AD39C5CB4834742B22ED0D78FC10D06", 00:20:59.366 "uuid": "0ad39c5c-b483-4742-b22e-d0d78fc10d06", 00:20:59.366 "no_auto_visible": false 00:20:59.366 } 00:20:59.366 } 00:20:59.366 }, 00:20:59.366 { 00:20:59.366 "method": "nvmf_subsystem_add_listener", 00:20:59.366 "params": { 00:20:59.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.366 "listen_address": { 00:20:59.366 "trtype": "TCP", 00:20:59.367 "adrfam": "IPv4", 00:20:59.367 "traddr": "10.0.0.2", 00:20:59.367 "trsvcid": "4420" 00:20:59.367 }, 00:20:59.367 "secure_channel": true 00:20:59.367 } 00:20:59.367 } 00:20:59.367 ] 00:20:59.367 } 00:20:59.367 ] 00:20:59.367 }' 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86429 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86429 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86429 ']' 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:59.367 13:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.367 [2024-05-15 13:39:12.266314] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:59.367 [2024-05-15 13:39:12.266609] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.367 [2024-05-15 13:39:12.389999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:59.367 [2024-05-15 13:39:12.401464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.367 [2024-05-15 13:39:12.455507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
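The JSON blob fed to the target above through -c /dev/fd/62 is just a serialized form of the usual per-RPC setup. A minimal shell sketch of the equivalent sequence, assuming the same rpc.py path, NQNs, listen address and PSK file that appear in this run (an illustration only, not the exact helper the test script calls; the setup_nvmf_tgt phase later in this log issues essentially these calls one by one):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # TCP transport and the subsystem, with the same flags as the traced setup_nvmf_tgt calls
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # TLS-enabled listener (-k), then the malloc namespace and the allowed host with its PSK
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH
    $rpc save_config    # should emit JSON equivalent to the tgtconf capture above

Running this against a bare nvmf_tgt should reproduce the nvmf and keyring-relevant portions of the configuration dumped by save_config earlier in the log.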
00:20:59.367 [2024-05-15 13:39:12.455733] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.367 [2024-05-15 13:39:12.455846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.367 [2024-05-15 13:39:12.455903] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.367 [2024-05-15 13:39:12.455984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.367 [2024-05-15 13:39:12.456105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.625 [2024-05-15 13:39:12.661618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.625 [2024-05-15 13:39:12.677584] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.625 [2024-05-15 13:39:12.693543] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:59.625 [2024-05-15 13:39:12.693875] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.625 [2024-05-15 13:39:12.694263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=86461 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 86461 /var/tmp/bdevperf.sock 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86461 ']' 00:21:00.558 13:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:00.558 "subsystems": [ 00:21:00.558 { 00:21:00.558 "subsystem": "keyring", 00:21:00.558 "config": [] 00:21:00.558 }, 00:21:00.558 { 00:21:00.558 "subsystem": "iobuf", 00:21:00.558 "config": [ 00:21:00.558 { 00:21:00.558 "method": "iobuf_set_options", 00:21:00.558 "params": { 00:21:00.558 "small_pool_count": 8192, 00:21:00.558 "large_pool_count": 1024, 00:21:00.558 "small_bufsize": 8192, 00:21:00.558 "large_bufsize": 135168 00:21:00.558 } 00:21:00.558 } 00:21:00.558 ] 00:21:00.558 }, 00:21:00.558 { 00:21:00.558 "subsystem": "sock", 00:21:00.558 "config": [ 00:21:00.558 { 00:21:00.558 "method": "sock_impl_set_options", 00:21:00.558 "params": { 00:21:00.558 "impl_name": "uring", 00:21:00.558 "recv_buf_size": 2097152, 00:21:00.558 "send_buf_size": 2097152, 00:21:00.558 "enable_recv_pipe": true, 00:21:00.558 "enable_quickack": false, 00:21:00.558 "enable_placement_id": 0, 00:21:00.558 "enable_zerocopy_send_server": false, 00:21:00.558 "enable_zerocopy_send_client": false, 00:21:00.558 "zerocopy_threshold": 0, 00:21:00.558 "tls_version": 0, 00:21:00.558 "enable_ktls": false 00:21:00.558 } 00:21:00.558 }, 00:21:00.558 { 00:21:00.558 "method": 
"sock_impl_set_options", 00:21:00.558 "params": { 00:21:00.558 "impl_name": "posix", 00:21:00.558 "recv_buf_size": 2097152, 00:21:00.558 "send_buf_size": 2097152, 00:21:00.558 "enable_recv_pipe": true, 00:21:00.558 "enable_quickack": false, 00:21:00.558 "enable_placement_id": 0, 00:21:00.558 "enable_zerocopy_send_server": true, 00:21:00.558 "enable_zerocopy_send_client": false, 00:21:00.558 "zerocopy_threshold": 0, 00:21:00.558 "tls_version": 0, 00:21:00.558 "enable_ktls": false 00:21:00.558 } 00:21:00.558 }, 00:21:00.558 { 00:21:00.558 "method": "sock_impl_set_options", 00:21:00.558 "params": { 00:21:00.558 "impl_name": "ssl", 00:21:00.558 "recv_buf_size": 4096, 00:21:00.558 "send_buf_size": 4096, 00:21:00.558 "enable_recv_pipe": true, 00:21:00.558 "enable_quickack": false, 00:21:00.558 "enable_placement_id": 0, 00:21:00.559 "enable_zerocopy_send_server": true, 00:21:00.559 "enable_zerocopy_send_client": false, 00:21:00.559 "zerocopy_threshold": 0, 00:21:00.559 "tls_version": 0, 00:21:00.559 "enable_ktls": false 00:21:00.559 } 00:21:00.559 } 00:21:00.559 ] 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "subsystem": "vmd", 00:21:00.559 "config": [] 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "subsystem": "accel", 00:21:00.559 "config": [ 00:21:00.559 { 00:21:00.559 "method": "accel_set_options", 00:21:00.559 "params": { 00:21:00.559 "small_cache_size": 128, 00:21:00.559 "large_cache_size": 16, 00:21:00.559 "task_count": 2048, 00:21:00.559 "sequence_count": 2048, 00:21:00.559 "buf_count": 2048 00:21:00.559 } 00:21:00.559 } 00:21:00.559 ] 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "subsystem": "bdev", 00:21:00.559 "config": [ 00:21:00.559 { 00:21:00.559 "method": "bdev_set_options", 00:21:00.559 "params": { 00:21:00.559 "bdev_io_pool_size": 65535, 00:21:00.559 "bdev_io_cache_size": 256, 00:21:00.559 "bdev_auto_examine": true, 00:21:00.559 "iobuf_small_cache_size": 128, 00:21:00.559 "iobuf_large_cache_size": 16 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_raid_set_options", 00:21:00.559 "params": { 00:21:00.559 "process_window_size_kb": 1024 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_iscsi_set_options", 00:21:00.559 "params": { 00:21:00.559 "timeout_sec": 30 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_nvme_set_options", 00:21:00.559 "params": { 00:21:00.559 "action_on_timeout": "none", 00:21:00.559 "timeout_us": 0, 00:21:00.559 "timeout_admin_us": 0, 00:21:00.559 "keep_alive_timeout_ms": 10000, 00:21:00.559 "arbitration_burst": 0, 00:21:00.559 "low_priority_weight": 0, 00:21:00.559 "medium_priority_weight": 0, 00:21:00.559 "high_priority_weight": 0, 00:21:00.559 "nvme_adminq_poll_period_us": 10000, 00:21:00.559 "nvme_ioq_poll_period_us": 0, 00:21:00.559 "io_queue_requests": 512, 00:21:00.559 "delay_cmd_submit": true, 00:21:00.559 "transport_retry_count": 4, 00:21:00.559 "bdev_retry_count": 3, 00:21:00.559 "transport_ack_timeout": 0, 00:21:00.559 "ctrlr_loss_timeout_sec": 0, 00:21:00.559 "reconnect_delay_sec": 0, 00:21:00.559 "fast_io_fail_timeout_sec": 0, 00:21:00.559 "disable_auto_failback": false, 00:21:00.559 "generate_uuids": false, 00:21:00.559 "transport_tos": 0, 00:21:00.559 "nvme_error_stat": false, 00:21:00.559 "rdma_srq_size": 0, 00:21:00.559 "io_path_stat": false, 00:21:00.559 13:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:00.559 13:39:13 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.559 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.559 "allow_accel_sequence": false, 00:21:00.559 "rdma_max_cq_size": 0, 00:21:00.559 "rdma_cm_event_timeout_ms": 0, 00:21:00.559 "dhchap_digests": [ 00:21:00.559 "sha256", 00:21:00.559 "sha384", 00:21:00.559 "sha512" 00:21:00.559 ], 00:21:00.559 "dhchap_dhgroups": [ 00:21:00.559 "null", 00:21:00.559 "ffdhe2048", 00:21:00.559 "ffdhe3072", 00:21:00.559 "ffdhe4096", 00:21:00.559 "ffdhe6144", 00:21:00.559 "ffdhe8192" 00:21:00.559 ] 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_nvme_attach_controller", 00:21:00.559 "params": { 00:21:00.559 "name": "TLSTEST", 00:21:00.559 "trtype": "TCP", 00:21:00.559 "adrfam": "IPv4", 00:21:00.559 "traddr": "10.0.0.2", 00:21:00.559 "trsvcid": "4420", 00:21:00.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.559 "prchk_reftag": false, 00:21:00.559 "prchk_guard": false, 00:21:00.559 "ctrlr_loss_timeout_sec": 0, 00:21:00.559 "reconnect_delay_sec": 0, 00:21:00.559 "fast_io_fail_timeout_sec": 0, 00:21:00.559 "psk": "/tmp/tmp.tde0PGHzXH", 00:21:00.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.559 "hdgst": false, 00:21:00.559 "ddgst": false 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_nvme_set_hotplug", 00:21:00.559 "params": { 00:21:00.559 "period_us": 100000, 00:21:00.559 "enable": false 00:21:00.559 } 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "method": "bdev_wait_for_examine" 00:21:00.559 } 00:21:00.559 ] 00:21:00.559 }, 00:21:00.559 { 00:21:00.559 "subsystem": "nbd", 00:21:00.559 "config": [] 00:21:00.559 } 00:21:00.559 ] 00:21:00.559 }' 00:21:00.559 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.559 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.559 13:39:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.559 [2024-05-15 13:39:13.389549] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:00.559 [2024-05-15 13:39:13.389878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86461 ] 00:21:00.559 [2024-05-15 13:39:13.517947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
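On the initiator side the pattern is the same throughout this log: bdevperf is started idle with -z on its own RPC socket, the captured bdevperfconf JSON (which already contains the bdev_nvme_attach_controller call carrying the PSK) is handed to it on a file descriptor, and the verify workload is then triggered over that socket. A simplified sketch with the flags from this run; the process substitution and the socket poll are stand-ins for what the script does with /dev/fd/63 and waitforlisten:

    # start bdevperf idle (-z); -c /dev/fd/63 in the log is presumably a process
    # substitution of the captured JSON, as sketched here with $bdevperfconf
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &
    bdevperf_pid=$!

    # crude stand-in for waitforlisten: wait until the RPC socket exists
    while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

    # drive the 10-second verify run through the bdevperf RPC socket
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

    kill "$bdevperf_pid"    # the test script tears it down with its killprocess helper instead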
00:21:00.559 [2024-05-15 13:39:13.537986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.559 [2024-05-15 13:39:13.596004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.817 [2024-05-15 13:39:13.749075] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.817 [2024-05-15 13:39:13.749734] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.385 13:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.385 13:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:01.385 13:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:01.643 Running I/O for 10 seconds... 00:21:11.618 00:21:11.618 Latency(us) 00:21:11.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.618 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.618 Verification LBA range: start 0x0 length 0x2000 00:21:11.618 TLSTESTn1 : 10.01 5063.64 19.78 0.00 0.00 25233.95 5305.30 213709.78 00:21:11.618 =================================================================================================================== 00:21:11.618 Total : 5063.64 19.78 0.00 0.00 25233.95 5305.30 213709.78 00:21:11.618 0 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 86461 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86461 ']' 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86461 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86461 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86461' 00:21:11.618 killing process with pid 86461 00:21:11.618 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86461 00:21:11.618 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.618 00:21:11.618 Latency(us) 00:21:11.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.618 =================================================================================================================== 00:21:11.618 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.618 [2024-05-15 13:39:24.551158] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86461 00:21:11.618 scheduled for removal in v24.09 hit 1 times 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 86429 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86429 ']' 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- 
# kill -0 86429 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86429 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86429' 00:21:11.876 killing process with pid 86429 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86429 00:21:11.876 [2024-05-15 13:39:24.772831] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:11.876 [2024-05-15 13:39:24.773065] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86429 00:21:11.876 removal in v24.09 hit 1 times 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:11.876 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86600 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86600 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86600 ']' 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:12.190 13:39:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.190 [2024-05-15 13:39:25.035137] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:12.190 [2024-05-15 13:39:25.035511] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.190 [2024-05-15 13:39:25.166590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:12.190 [2024-05-15 13:39:25.185457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.190 [2024-05-15 13:39:25.243324] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:12.190 [2024-05-15 13:39:25.243598] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.190 [2024-05-15 13:39:25.243742] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.190 [2024-05-15 13:39:25.243814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.190 [2024-05-15 13:39:25.243854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.190 [2024-05-15 13:39:25.244026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.124 13:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.124 13:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:13.124 13:39:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.124 13:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.124 13:39:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.124 13:39:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.124 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.tde0PGHzXH 00:21:13.124 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tde0PGHzXH 00:21:13.124 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:13.124 [2024-05-15 13:39:26.218267] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.382 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:13.640 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:13.640 [2024-05-15 13:39:26.738319] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:13.640 [2024-05-15 13:39:26.738686] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.640 [2024-05-15 13:39:26.738956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.898 13:39:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.156 malloc0 00:21:14.156 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:14.414 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tde0PGHzXH 00:21:14.672 [2024-05-15 13:39:27.616356] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=86649 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 
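Once this bdevperf instance is up, the test switches to the keyring-based way of supplying the TLS PSK: the key file is first registered with keyring_file_add_key and the controller attach references it by name, instead of passing the file path straight to bdev_nvme_attach_controller as in the earlier phase (which is what triggered the spdk_nvme_ctrlr_opts.psk deprecation warnings above). In sketch form, with the socket, NQNs and PSK file from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # register the PSK file under the name key0 in the bdevperf application's keyring
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tde0PGHzXH
    # attach over TLS, referencing the key by name rather than by path
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

These two calls appear verbatim a little further down; note that this form logs only the "TLS support is considered experimental" notice, without the deprecated-PSK warning seen after the earlier path-based attach.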
00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 86649 /var/tmp/bdevperf.sock 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86649 ']' 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.672 13:39:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.672 [2024-05-15 13:39:27.713855] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:14.672 [2024-05-15 13:39:27.714414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86649 ] 00:21:14.930 [2024-05-15 13:39:27.856037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:14.930 [2024-05-15 13:39:27.874100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.930 [2024-05-15 13:39:27.959526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.888 13:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:15.888 13:39:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:15.888 13:39:28 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tde0PGHzXH 00:21:16.146 13:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:16.403 [2024-05-15 13:39:29.277827] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.403 nvme0n1 00:21:16.403 13:39:29 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.403 Running I/O for 1 seconds... 
00:21:17.775 00:21:17.775 Latency(us) 00:21:17.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.775 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.775 Verification LBA range: start 0x0 length 0x2000 00:21:17.775 nvme0n1 : 1.01 5002.70 19.54 0.00 0.00 25407.23 3651.29 18849.40 00:21:17.775 =================================================================================================================== 00:21:17.775 Total : 5002.70 19.54 0.00 0.00 25407.23 3651.29 18849.40 00:21:17.775 0 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 86649 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86649 ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86649 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86649 00:21:17.775 killing process with pid 86649 00:21:17.775 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.775 00:21:17.775 Latency(us) 00:21:17.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.775 =================================================================================================================== 00:21:17.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86649' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86649 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86649 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 86600 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86600 ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86600 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86600 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86600' 00:21:17.775 killing process with pid 86600 00:21:17.775 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86600 00:21:17.776 [2024-05-15 13:39:30.755195] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86600 00:21:17.776 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:17.776 [2024-05-15 13:39:30.756330] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 
1 times 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86706 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86706 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86706 ']' 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.034 13:39:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.034 [2024-05-15 13:39:31.013562] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:18.034 [2024-05-15 13:39:31.013888] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.292 [2024-05-15 13:39:31.136038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:18.293 [2024-05-15 13:39:31.151303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.293 [2024-05-15 13:39:31.204389] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.293 [2024-05-15 13:39:31.204682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.293 [2024-05-15 13:39:31.204816] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.293 [2024-05-15 13:39:31.204909] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.293 [2024-05-15 13:39:31.204943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
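The killprocess helper traced repeatedly above (for the bdevperf pids and the matching nvmf_tgt pids) follows the same pattern each time: validate the pid argument, check the process is still alive, look up its command name, then kill and reap it. A rough reconstruction from the xtrace lines, purely as an illustration of what the '[' ... ']' / ps / kill / wait steps correspond to; the real helper lives in autotest_common.sh and may differ in detail:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                           # @946: a pid must be given
        kill -0 "$pid" || return 1                          # @950: bail out if it already exited
        local process_name=
        if [ "$(uname)" = Linux ]; then                     # @951
            process_name=$(ps --no-headers -o comm= "$pid") # @952: e.g. reactor_0/1/2 in this log
        fi
        # @956: the real helper special-cases process_name = sudo; that branch is
        # never taken in this log, so it is omitted from this sketch
        echo "killing process with pid $pid"                # @964
        kill "$pid"                                         # @965
        wait "$pid"                                         # @970: reap it and return its exit status
    }

The repeated pairs of 'killing process with pid ...' lines above are this helper tearing down first the bdevperf instance and then the corresponding nvmf_tgt.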
00:21:18.293 [2024-05-15 13:39:31.205044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.858 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.858 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:18.858 13:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.858 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.858 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.117 13:39:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.117 13:39:31 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:19.117 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.117 13:39:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.117 [2024-05-15 13:39:31.988681] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.117 malloc0 00:21:19.117 [2024-05-15 13:39:32.026271] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:19.117 [2024-05-15 13:39:32.026737] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.117 [2024-05-15 13:39:32.027211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=86738 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 86738 /var/tmp/bdevperf.sock 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86738 ']' 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.117 13:39:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.117 [2024-05-15 13:39:32.102847] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:19.117 [2024-05-15 13:39:32.103156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86738 ] 00:21:19.374 [2024-05-15 13:39:32.225965] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:19.374 [2024-05-15 13:39:32.236847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.374 [2024-05-15 13:39:32.290327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.306 13:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:20.306 13:39:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:20.306 13:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tde0PGHzXH 00:21:20.568 13:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:20.568 [2024-05-15 13:39:33.658436] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.826 nvme0n1 00:21:20.826 13:39:33 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:20.826 Running I/O for 1 seconds... 00:21:21.764 00:21:21.764 Latency(us) 00:21:21.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.764 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.764 Verification LBA range: start 0x0 length 0x2000 00:21:21.764 nvme0n1 : 1.01 4418.40 17.26 0.00 0.00 28724.61 5242.88 19723.22 00:21:21.764 =================================================================================================================== 00:21:21.764 Total : 4418.40 17.26 0.00 0.00 28724.61 5242.88 19723.22 00:21:21.764 0 00:21:22.023 13:39:34 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:22.023 13:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.023 13:39:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.023 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.023 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:22.023 "subsystems": [ 00:21:22.023 { 00:21:22.023 "subsystem": "keyring", 00:21:22.023 "config": [ 00:21:22.023 { 00:21:22.023 "method": "keyring_file_add_key", 00:21:22.023 "params": { 00:21:22.023 "name": "key0", 00:21:22.023 "path": "/tmp/tmp.tde0PGHzXH" 00:21:22.023 } 00:21:22.023 } 00:21:22.023 ] 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "subsystem": "iobuf", 00:21:22.023 "config": [ 00:21:22.023 { 00:21:22.023 "method": "iobuf_set_options", 00:21:22.023 "params": { 00:21:22.023 "small_pool_count": 8192, 00:21:22.023 "large_pool_count": 1024, 00:21:22.023 "small_bufsize": 8192, 00:21:22.023 "large_bufsize": 135168 00:21:22.023 } 00:21:22.023 } 00:21:22.023 ] 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "subsystem": "sock", 00:21:22.023 "config": [ 00:21:22.023 { 00:21:22.023 "method": "sock_impl_set_options", 00:21:22.023 "params": { 00:21:22.023 "impl_name": "uring", 00:21:22.023 "recv_buf_size": 2097152, 00:21:22.023 "send_buf_size": 2097152, 00:21:22.023 "enable_recv_pipe": true, 00:21:22.023 "enable_quickack": false, 00:21:22.023 "enable_placement_id": 0, 00:21:22.023 "enable_zerocopy_send_server": false, 00:21:22.023 "enable_zerocopy_send_client": false, 00:21:22.023 "zerocopy_threshold": 0, 00:21:22.023 "tls_version": 0, 00:21:22.023 "enable_ktls": false 00:21:22.023 } 00:21:22.023 }, 
00:21:22.023 { 00:21:22.023 "method": "sock_impl_set_options", 00:21:22.023 "params": { 00:21:22.023 "impl_name": "posix", 00:21:22.023 "recv_buf_size": 2097152, 00:21:22.023 "send_buf_size": 2097152, 00:21:22.023 "enable_recv_pipe": true, 00:21:22.023 "enable_quickack": false, 00:21:22.023 "enable_placement_id": 0, 00:21:22.023 "enable_zerocopy_send_server": true, 00:21:22.023 "enable_zerocopy_send_client": false, 00:21:22.023 "zerocopy_threshold": 0, 00:21:22.023 "tls_version": 0, 00:21:22.023 "enable_ktls": false 00:21:22.023 } 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "method": "sock_impl_set_options", 00:21:22.023 "params": { 00:21:22.023 "impl_name": "ssl", 00:21:22.023 "recv_buf_size": 4096, 00:21:22.023 "send_buf_size": 4096, 00:21:22.023 "enable_recv_pipe": true, 00:21:22.023 "enable_quickack": false, 00:21:22.023 "enable_placement_id": 0, 00:21:22.023 "enable_zerocopy_send_server": true, 00:21:22.023 "enable_zerocopy_send_client": false, 00:21:22.023 "zerocopy_threshold": 0, 00:21:22.023 "tls_version": 0, 00:21:22.023 "enable_ktls": false 00:21:22.023 } 00:21:22.023 } 00:21:22.023 ] 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "subsystem": "vmd", 00:21:22.023 "config": [] 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "subsystem": "accel", 00:21:22.023 "config": [ 00:21:22.023 { 00:21:22.023 "method": "accel_set_options", 00:21:22.023 "params": { 00:21:22.023 "small_cache_size": 128, 00:21:22.023 "large_cache_size": 16, 00:21:22.023 "task_count": 2048, 00:21:22.023 "sequence_count": 2048, 00:21:22.023 "buf_count": 2048 00:21:22.023 } 00:21:22.023 } 00:21:22.023 ] 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "subsystem": "bdev", 00:21:22.023 "config": [ 00:21:22.023 { 00:21:22.023 "method": "bdev_set_options", 00:21:22.023 "params": { 00:21:22.023 "bdev_io_pool_size": 65535, 00:21:22.023 "bdev_io_cache_size": 256, 00:21:22.023 "bdev_auto_examine": true, 00:21:22.023 "iobuf_small_cache_size": 128, 00:21:22.023 "iobuf_large_cache_size": 16 00:21:22.023 } 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "method": "bdev_raid_set_options", 00:21:22.023 "params": { 00:21:22.023 "process_window_size_kb": 1024 00:21:22.023 } 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "method": "bdev_iscsi_set_options", 00:21:22.023 "params": { 00:21:22.023 "timeout_sec": 30 00:21:22.023 } 00:21:22.023 }, 00:21:22.023 { 00:21:22.023 "method": "bdev_nvme_set_options", 00:21:22.023 "params": { 00:21:22.023 "action_on_timeout": "none", 00:21:22.023 "timeout_us": 0, 00:21:22.023 "timeout_admin_us": 0, 00:21:22.024 "keep_alive_timeout_ms": 10000, 00:21:22.024 "arbitration_burst": 0, 00:21:22.024 "low_priority_weight": 0, 00:21:22.024 "medium_priority_weight": 0, 00:21:22.024 "high_priority_weight": 0, 00:21:22.024 "nvme_adminq_poll_period_us": 10000, 00:21:22.024 "nvme_ioq_poll_period_us": 0, 00:21:22.024 "io_queue_requests": 0, 00:21:22.024 "delay_cmd_submit": true, 00:21:22.024 "transport_retry_count": 4, 00:21:22.024 "bdev_retry_count": 3, 00:21:22.024 "transport_ack_timeout": 0, 00:21:22.024 "ctrlr_loss_timeout_sec": 0, 00:21:22.024 "reconnect_delay_sec": 0, 00:21:22.024 "fast_io_fail_timeout_sec": 0, 00:21:22.024 "disable_auto_failback": false, 00:21:22.024 "generate_uuids": false, 00:21:22.024 "transport_tos": 0, 00:21:22.024 "nvme_error_stat": false, 00:21:22.024 "rdma_srq_size": 0, 00:21:22.024 "io_path_stat": false, 00:21:22.024 "allow_accel_sequence": false, 00:21:22.024 "rdma_max_cq_size": 0, 00:21:22.024 "rdma_cm_event_timeout_ms": 0, 00:21:22.024 "dhchap_digests": [ 00:21:22.024 "sha256", 00:21:22.024 
"sha384", 00:21:22.024 "sha512" 00:21:22.024 ], 00:21:22.024 "dhchap_dhgroups": [ 00:21:22.024 "null", 00:21:22.024 "ffdhe2048", 00:21:22.024 "ffdhe3072", 00:21:22.024 "ffdhe4096", 00:21:22.024 "ffdhe6144", 00:21:22.024 "ffdhe8192" 00:21:22.024 ] 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "bdev_nvme_set_hotplug", 00:21:22.024 "params": { 00:21:22.024 "period_us": 100000, 00:21:22.024 "enable": false 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "bdev_malloc_create", 00:21:22.024 "params": { 00:21:22.024 "name": "malloc0", 00:21:22.024 "num_blocks": 8192, 00:21:22.024 "block_size": 4096, 00:21:22.024 "physical_block_size": 4096, 00:21:22.024 "uuid": "bf1aa020-2e5b-4163-9197-f8ba20b61f9b", 00:21:22.024 "optimal_io_boundary": 0 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "bdev_wait_for_examine" 00:21:22.024 } 00:21:22.024 ] 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "subsystem": "nbd", 00:21:22.024 "config": [] 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "subsystem": "scheduler", 00:21:22.024 "config": [ 00:21:22.024 { 00:21:22.024 "method": "framework_set_scheduler", 00:21:22.024 "params": { 00:21:22.024 "name": "static" 00:21:22.024 } 00:21:22.024 } 00:21:22.024 ] 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "subsystem": "nvmf", 00:21:22.024 "config": [ 00:21:22.024 { 00:21:22.024 "method": "nvmf_set_config", 00:21:22.024 "params": { 00:21:22.024 "discovery_filter": "match_any", 00:21:22.024 "admin_cmd_passthru": { 00:21:22.024 "identify_ctrlr": false 00:21:22.024 } 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_set_max_subsystems", 00:21:22.024 "params": { 00:21:22.024 "max_subsystems": 1024 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_set_crdt", 00:21:22.024 "params": { 00:21:22.024 "crdt1": 0, 00:21:22.024 "crdt2": 0, 00:21:22.024 "crdt3": 0 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_create_transport", 00:21:22.024 "params": { 00:21:22.024 "trtype": "TCP", 00:21:22.024 "max_queue_depth": 128, 00:21:22.024 "max_io_qpairs_per_ctrlr": 127, 00:21:22.024 "in_capsule_data_size": 4096, 00:21:22.024 "max_io_size": 131072, 00:21:22.024 "io_unit_size": 131072, 00:21:22.024 "max_aq_depth": 128, 00:21:22.024 "num_shared_buffers": 511, 00:21:22.024 "buf_cache_size": 4294967295, 00:21:22.024 "dif_insert_or_strip": false, 00:21:22.024 "zcopy": false, 00:21:22.024 "c2h_success": false, 00:21:22.024 "sock_priority": 0, 00:21:22.024 "abort_timeout_sec": 1, 00:21:22.024 "ack_timeout": 0, 00:21:22.024 "data_wr_pool_size": 0 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_create_subsystem", 00:21:22.024 "params": { 00:21:22.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.024 "allow_any_host": false, 00:21:22.024 "serial_number": "00000000000000000000", 00:21:22.024 "model_number": "SPDK bdev Controller", 00:21:22.024 "max_namespaces": 32, 00:21:22.024 "min_cntlid": 1, 00:21:22.024 "max_cntlid": 65519, 00:21:22.024 "ana_reporting": false 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_subsystem_add_host", 00:21:22.024 "params": { 00:21:22.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.024 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.024 "psk": "key0" 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_subsystem_add_ns", 00:21:22.024 "params": { 00:21:22.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.024 "namespace": { 00:21:22.024 "nsid": 1, 00:21:22.024 
"bdev_name": "malloc0", 00:21:22.024 "nguid": "BF1AA0202E5B41639197F8BA20B61F9B", 00:21:22.024 "uuid": "bf1aa020-2e5b-4163-9197-f8ba20b61f9b", 00:21:22.024 "no_auto_visible": false 00:21:22.024 } 00:21:22.024 } 00:21:22.024 }, 00:21:22.024 { 00:21:22.024 "method": "nvmf_subsystem_add_listener", 00:21:22.024 "params": { 00:21:22.024 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.024 "listen_address": { 00:21:22.024 "trtype": "TCP", 00:21:22.024 "adrfam": "IPv4", 00:21:22.024 "traddr": "10.0.0.2", 00:21:22.024 "trsvcid": "4420" 00:21:22.024 }, 00:21:22.024 "secure_channel": true 00:21:22.024 } 00:21:22.024 } 00:21:22.024 ] 00:21:22.024 } 00:21:22.024 ] 00:21:22.024 }' 00:21:22.024 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:22.334 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:22.334 "subsystems": [ 00:21:22.334 { 00:21:22.334 "subsystem": "keyring", 00:21:22.334 "config": [ 00:21:22.334 { 00:21:22.334 "method": "keyring_file_add_key", 00:21:22.334 "params": { 00:21:22.334 "name": "key0", 00:21:22.334 "path": "/tmp/tmp.tde0PGHzXH" 00:21:22.334 } 00:21:22.334 } 00:21:22.334 ] 00:21:22.334 }, 00:21:22.334 { 00:21:22.334 "subsystem": "iobuf", 00:21:22.334 "config": [ 00:21:22.334 { 00:21:22.334 "method": "iobuf_set_options", 00:21:22.334 "params": { 00:21:22.334 "small_pool_count": 8192, 00:21:22.334 "large_pool_count": 1024, 00:21:22.334 "small_bufsize": 8192, 00:21:22.334 "large_bufsize": 135168 00:21:22.334 } 00:21:22.334 } 00:21:22.334 ] 00:21:22.334 }, 00:21:22.334 { 00:21:22.334 "subsystem": "sock", 00:21:22.334 "config": [ 00:21:22.334 { 00:21:22.334 "method": "sock_impl_set_options", 00:21:22.334 "params": { 00:21:22.334 "impl_name": "uring", 00:21:22.334 "recv_buf_size": 2097152, 00:21:22.334 "send_buf_size": 2097152, 00:21:22.334 "enable_recv_pipe": true, 00:21:22.334 "enable_quickack": false, 00:21:22.334 "enable_placement_id": 0, 00:21:22.334 "enable_zerocopy_send_server": false, 00:21:22.334 "enable_zerocopy_send_client": false, 00:21:22.334 "zerocopy_threshold": 0, 00:21:22.334 "tls_version": 0, 00:21:22.334 "enable_ktls": false 00:21:22.334 } 00:21:22.334 }, 00:21:22.334 { 00:21:22.334 "method": "sock_impl_set_options", 00:21:22.334 "params": { 00:21:22.334 "impl_name": "posix", 00:21:22.334 "recv_buf_size": 2097152, 00:21:22.334 "send_buf_size": 2097152, 00:21:22.334 "enable_recv_pipe": true, 00:21:22.334 "enable_quickack": false, 00:21:22.334 "enable_placement_id": 0, 00:21:22.334 "enable_zerocopy_send_server": true, 00:21:22.334 "enable_zerocopy_send_client": false, 00:21:22.334 "zerocopy_threshold": 0, 00:21:22.334 "tls_version": 0, 00:21:22.334 "enable_ktls": false 00:21:22.334 } 00:21:22.334 }, 00:21:22.334 { 00:21:22.334 "method": "sock_impl_set_options", 00:21:22.334 "params": { 00:21:22.334 "impl_name": "ssl", 00:21:22.334 "recv_buf_size": 4096, 00:21:22.334 "send_buf_size": 4096, 00:21:22.334 "enable_recv_pipe": true, 00:21:22.334 "enable_quickack": false, 00:21:22.334 "enable_placement_id": 0, 00:21:22.334 "enable_zerocopy_send_server": true, 00:21:22.334 "enable_zerocopy_send_client": false, 00:21:22.334 "zerocopy_threshold": 0, 00:21:22.334 "tls_version": 0, 00:21:22.335 "enable_ktls": false 00:21:22.335 } 00:21:22.335 } 00:21:22.335 ] 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "subsystem": "vmd", 00:21:22.335 "config": [] 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "subsystem": "accel", 00:21:22.335 "config": [ 00:21:22.335 { 00:21:22.335 
"method": "accel_set_options", 00:21:22.335 "params": { 00:21:22.335 "small_cache_size": 128, 00:21:22.335 "large_cache_size": 16, 00:21:22.335 "task_count": 2048, 00:21:22.335 "sequence_count": 2048, 00:21:22.335 "buf_count": 2048 00:21:22.335 } 00:21:22.335 } 00:21:22.335 ] 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "subsystem": "bdev", 00:21:22.335 "config": [ 00:21:22.335 { 00:21:22.335 "method": "bdev_set_options", 00:21:22.335 "params": { 00:21:22.335 "bdev_io_pool_size": 65535, 00:21:22.335 "bdev_io_cache_size": 256, 00:21:22.335 "bdev_auto_examine": true, 00:21:22.335 "iobuf_small_cache_size": 128, 00:21:22.335 "iobuf_large_cache_size": 16 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_raid_set_options", 00:21:22.335 "params": { 00:21:22.335 "process_window_size_kb": 1024 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_iscsi_set_options", 00:21:22.335 "params": { 00:21:22.335 "timeout_sec": 30 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_nvme_set_options", 00:21:22.335 "params": { 00:21:22.335 "action_on_timeout": "none", 00:21:22.335 "timeout_us": 0, 00:21:22.335 "timeout_admin_us": 0, 00:21:22.335 "keep_alive_timeout_ms": 10000, 00:21:22.335 "arbitration_burst": 0, 00:21:22.335 "low_priority_weight": 0, 00:21:22.335 "medium_priority_weight": 0, 00:21:22.335 "high_priority_weight": 0, 00:21:22.335 "nvme_adminq_poll_period_us": 10000, 00:21:22.335 "nvme_ioq_poll_period_us": 0, 00:21:22.335 "io_queue_requests": 512, 00:21:22.335 "delay_cmd_submit": true, 00:21:22.335 "transport_retry_count": 4, 00:21:22.335 "bdev_retry_count": 3, 00:21:22.335 "transport_ack_timeout": 0, 00:21:22.335 "ctrlr_loss_timeout_sec": 0, 00:21:22.335 "reconnect_delay_sec": 0, 00:21:22.335 "fast_io_fail_timeout_sec": 0, 00:21:22.335 "disable_auto_failback": false, 00:21:22.335 "generate_uuids": false, 00:21:22.335 "transport_tos": 0, 00:21:22.335 "nvme_error_stat": false, 00:21:22.335 "rdma_srq_size": 0, 00:21:22.335 "io_path_stat": false, 00:21:22.335 "allow_accel_sequence": false, 00:21:22.335 "rdma_max_cq_size": 0, 00:21:22.335 "rdma_cm_event_timeout_ms": 0, 00:21:22.335 "dhchap_digests": [ 00:21:22.335 "sha256", 00:21:22.335 "sha384", 00:21:22.335 "sha512" 00:21:22.335 ], 00:21:22.335 "dhchap_dhgroups": [ 00:21:22.335 "null", 00:21:22.335 "ffdhe2048", 00:21:22.335 "ffdhe3072", 00:21:22.335 "ffdhe4096", 00:21:22.335 "ffdhe6144", 00:21:22.335 "ffdhe8192" 00:21:22.335 ] 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_nvme_attach_controller", 00:21:22.335 "params": { 00:21:22.335 "name": "nvme0", 00:21:22.335 "trtype": "TCP", 00:21:22.335 "adrfam": "IPv4", 00:21:22.335 "traddr": "10.0.0.2", 00:21:22.335 "trsvcid": "4420", 00:21:22.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.335 "prchk_reftag": false, 00:21:22.335 "prchk_guard": false, 00:21:22.335 "ctrlr_loss_timeout_sec": 0, 00:21:22.335 "reconnect_delay_sec": 0, 00:21:22.335 "fast_io_fail_timeout_sec": 0, 00:21:22.335 "psk": "key0", 00:21:22.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.335 "hdgst": false, 00:21:22.335 "ddgst": false 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_nvme_set_hotplug", 00:21:22.335 "params": { 00:21:22.335 "period_us": 100000, 00:21:22.335 "enable": false 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "method": "bdev_enable_histogram", 00:21:22.335 "params": { 00:21:22.335 "name": "nvme0n1", 00:21:22.335 "enable": true 00:21:22.335 } 00:21:22.335 }, 00:21:22.335 
{ 00:21:22.335 "method": "bdev_wait_for_examine" 00:21:22.335 } 00:21:22.335 ] 00:21:22.335 }, 00:21:22.335 { 00:21:22.335 "subsystem": "nbd", 00:21:22.335 "config": [] 00:21:22.335 } 00:21:22.335 ] 00:21:22.335 }' 00:21:22.335 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 86738 00:21:22.335 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86738 ']' 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86738 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86738 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86738' 00:21:22.623 killing process with pid 86738 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86738 00:21:22.623 Received shutdown signal, test time was about 1.000000 seconds 00:21:22.623 00:21:22.623 Latency(us) 00:21:22.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.623 =================================================================================================================== 00:21:22.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86738 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 86706 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86706 ']' 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86706 00:21:22.623 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86706 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86706' 00:21:22.624 killing process with pid 86706 00:21:22.624 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86706 00:21:22.624 [2024-05-15 13:39:35.669975] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86706 00:21:22.624 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:22.883 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:22.884 13:39:35 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:22.884 "subsystems": [ 00:21:22.884 { 00:21:22.884 "subsystem": "keyring", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "keyring_file_add_key", 00:21:22.884 "params": { 00:21:22.884 "name": "key0", 00:21:22.884 "path": "/tmp/tmp.tde0PGHzXH" 00:21:22.884 } 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 
00:21:22.884 "subsystem": "iobuf", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "iobuf_set_options", 00:21:22.884 "params": { 00:21:22.884 "small_pool_count": 8192, 00:21:22.884 "large_pool_count": 1024, 00:21:22.884 "small_bufsize": 8192, 00:21:22.884 "large_bufsize": 135168 00:21:22.884 } 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "sock", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "sock_impl_set_options", 00:21:22.884 "params": { 00:21:22.884 "impl_name": "uring", 00:21:22.884 "recv_buf_size": 2097152, 00:21:22.884 "send_buf_size": 2097152, 00:21:22.884 "enable_recv_pipe": true, 00:21:22.884 "enable_quickack": false, 00:21:22.884 "enable_placement_id": 0, 00:21:22.884 "enable_zerocopy_send_server": false, 00:21:22.884 "enable_zerocopy_send_client": false, 00:21:22.884 "zerocopy_threshold": 0, 00:21:22.884 "tls_version": 0, 00:21:22.884 "enable_ktls": false 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "sock_impl_set_options", 00:21:22.884 "params": { 00:21:22.884 "impl_name": "posix", 00:21:22.884 "recv_buf_size": 2097152, 00:21:22.884 "send_buf_size": 2097152, 00:21:22.884 "enable_recv_pipe": true, 00:21:22.884 "enable_quickack": false, 00:21:22.884 "enable_placement_id": 0, 00:21:22.884 "enable_zerocopy_send_server": true, 00:21:22.884 "enable_zerocopy_send_client": false, 00:21:22.884 "zerocopy_threshold": 0, 00:21:22.884 "tls_version": 0, 00:21:22.884 "enable_ktls": false 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "sock_impl_set_options", 00:21:22.884 "params": { 00:21:22.884 "impl_name": "ssl", 00:21:22.884 "recv_buf_size": 4096, 00:21:22.884 "send_buf_size": 4096, 00:21:22.884 "enable_recv_pipe": true, 00:21:22.884 "enable_quickack": false, 00:21:22.884 "enable_placement_id": 0, 00:21:22.884 "enable_zerocopy_send_server": true, 00:21:22.884 "enable_zerocopy_send_client": false, 00:21:22.884 "zerocopy_threshold": 0, 00:21:22.884 "tls_version": 0, 00:21:22.884 "enable_ktls": false 00:21:22.884 } 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "vmd", 00:21:22.884 "config": [] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "accel", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "accel_set_options", 00:21:22.884 "params": { 00:21:22.884 "small_cache_size": 128, 00:21:22.884 "large_cache_size": 16, 00:21:22.884 "task_count": 2048, 00:21:22.884 "sequence_count": 2048, 00:21:22.884 "buf_count": 2048 00:21:22.884 } 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "bdev", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "bdev_set_options", 00:21:22.884 "params": { 00:21:22.884 "bdev_io_pool_size": 65535, 00:21:22.884 "bdev_io_cache_size": 256, 00:21:22.884 "bdev_auto_examine": true, 00:21:22.884 "iobuf_small_cache_size": 128, 00:21:22.884 "iobuf_large_cache_size": 16 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_raid_set_options", 00:21:22.884 "params": { 00:21:22.884 "process_window_size_kb": 1024 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_iscsi_set_options", 00:21:22.884 "params": { 00:21:22.884 "timeout_sec": 30 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_nvme_set_options", 00:21:22.884 "params": { 00:21:22.884 "action_on_timeout": "none", 00:21:22.884 "timeout_us": 0, 00:21:22.884 "timeout_admin_us": 0, 00:21:22.884 "keep_alive_timeout_ms": 10000, 
00:21:22.884 "arbitration_burst": 0, 00:21:22.884 "low_priority_weight": 0, 00:21:22.884 "medium_priority_weight": 0, 00:21:22.884 "high_priority_weight": 0, 00:21:22.884 "nvme_adminq_poll_period_us": 10000, 00:21:22.884 "nvme_ioq_poll_period_us": 0, 00:21:22.884 "io_queue_requests": 0, 00:21:22.884 "delay_cmd_submit": true, 00:21:22.884 "transport_retry_count": 4, 00:21:22.884 "bdev_retry_count": 3, 00:21:22.884 "transport_ack_timeout": 0, 00:21:22.884 "ctrlr_loss_timeout_sec": 0, 00:21:22.884 "reconnect_delay_sec": 0, 00:21:22.884 "fast_io_fail_timeout_sec": 0, 00:21:22.884 "disable_auto_failback": false, 00:21:22.884 "generate_uuids": false, 00:21:22.884 "transport_tos": 0, 00:21:22.884 "nvme_error_stat": false, 00:21:22.884 "rdma_srq_size": 0, 00:21:22.884 "io_path_stat": false, 00:21:22.884 "allow_accel_sequence": false, 00:21:22.884 "rdma_max_cq_size": 0, 00:21:22.884 "rdma_cm_event_timeout_ms": 0, 00:21:22.884 "dhchap_digests": [ 00:21:22.884 "sha256", 00:21:22.884 "sha384", 00:21:22.884 "sha512" 00:21:22.884 ], 00:21:22.884 "dhchap_dhgroups": [ 00:21:22.884 "null", 00:21:22.884 "ffdhe2048", 00:21:22.884 "ffdhe3072", 00:21:22.884 "ffdhe4096", 00:21:22.884 "ffdhe6144", 00:21:22.884 "ffdhe8192" 00:21:22.884 ] 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_nvme_set_hotplug", 00:21:22.884 "params": { 00:21:22.884 "period_us": 100000, 00:21:22.884 "enable": false 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_malloc_create", 00:21:22.884 "params": { 00:21:22.884 "name": "malloc0", 00:21:22.884 "num_blocks": 8192, 00:21:22.884 "block_size": 4096, 00:21:22.884 "physical_block_size": 4096, 00:21:22.884 "uuid": "bf1aa020-2e5b-4163-9197-f8ba20b61f9b", 00:21:22.884 "optimal_io_boundary": 0 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "bdev_wait_for_examine" 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "nbd", 00:21:22.884 "config": [] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "scheduler", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "framework_set_scheduler", 00:21:22.884 "params": { 00:21:22.884 "name": "static" 00:21:22.884 } 00:21:22.884 } 00:21:22.884 ] 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "subsystem": "nvmf", 00:21:22.884 "config": [ 00:21:22.884 { 00:21:22.884 "method": "nvmf_set_config", 00:21:22.884 "params": { 00:21:22.884 "discovery_filter": "match_any", 00:21:22.884 "admin_cmd_passthru": { 00:21:22.884 "identify_ctrlr": false 00:21:22.884 } 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_set_max_subsystems", 00:21:22.884 "params": { 00:21:22.884 "max_subsystems": 1024 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_set_crdt", 00:21:22.884 "params": { 00:21:22.884 "crdt1": 0, 00:21:22.884 "crdt2": 0, 00:21:22.884 "crdt3": 0 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_create_transport", 00:21:22.884 "params": { 00:21:22.884 "trtype": "TCP", 00:21:22.884 "max_queue_depth": 128, 00:21:22.884 "max_io_qpairs_per_ctrlr": 127, 00:21:22.884 "in_capsule_data_size": 4096, 00:21:22.884 "max_io_size": 131072, 00:21:22.884 "io_unit_size": 131072, 00:21:22.884 "max_aq_depth": 128, 00:21:22.884 "num_shared_buffers": 511, 00:21:22.884 "buf_cache_size": 4294967295, 00:21:22.884 "dif_insert_or_strip": false, 00:21:22.884 "zcopy": false, 00:21:22.884 "c2h_success": false, 00:21:22.884 "sock_priority": 0, 00:21:22.884 "abort_timeout_sec": 1, 00:21:22.884 
"ack_timeout": 0, 00:21:22.884 "data_wr_pool_size": 0 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_create_subsystem", 00:21:22.884 "params": { 00:21:22.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.884 "allow_any_host": false, 00:21:22.884 "serial_number": "00000000000000000000", 00:21:22.884 "model_number": "SPDK bdev Controller", 00:21:22.884 "max_namespaces": 32, 00:21:22.884 "min_cntlid": 1, 00:21:22.884 "max_cntlid": 65519, 00:21:22.884 "ana_reporting": false 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_subsystem_add_host", 00:21:22.884 "params": { 00:21:22.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.884 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.884 "psk": "key0" 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_subsystem_add_ns", 00:21:22.884 "params": { 00:21:22.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.884 "namespace": { 00:21:22.884 "nsid": 1, 00:21:22.884 "bdev_name": "malloc0", 00:21:22.884 "nguid": "BF1AA0202E5B41639197F8BA20B61F9B", 00:21:22.884 "uuid": "bf1aa020-2e5b-4163-9197-f8ba20b61f9b", 00:21:22.884 "no_auto_visible": false 00:21:22.884 } 00:21:22.884 } 00:21:22.884 }, 00:21:22.884 { 00:21:22.884 "method": "nvmf_subsystem_add_listener", 00:21:22.884 "params": { 00:21:22.884 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.884 "listen_address": { 00:21:22.884 "trtype": "TCP", 00:21:22.884 "adrfam": "IPv4", 00:21:22.884 "traddr": "10.0.0.2", 00:21:22.884 "trsvcid": "4420" 00:21:22.884 }, 00:21:22.884 "secure_channel": true 00:21:22.885 } 00:21:22.885 } 00:21:22.885 ] 00:21:22.885 } 00:21:22.885 ] 00:21:22.885 }' 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=86798 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 86798 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86798 ']' 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:22.885 13:39:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.885 [2024-05-15 13:39:35.930926] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:22.885 [2024-05-15 13:39:35.931541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.142 [2024-05-15 13:39:36.054591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:21:23.142 [2024-05-15 13:39:36.068460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.142 [2024-05-15 13:39:36.142989] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.142 [2024-05-15 13:39:36.143405] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.142 [2024-05-15 13:39:36.143585] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.142 [2024-05-15 13:39:36.143831] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.142 [2024-05-15 13:39:36.143887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.142 [2024-05-15 13:39:36.144117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.400 [2024-05-15 13:39:36.358726] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.400 [2024-05-15 13:39:36.390650] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:23.400 [2024-05-15 13:39:36.391069] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.400 [2024-05-15 13:39:36.391365] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=86831 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 86831 /var/tmp/bdevperf.sock 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 86831 ']' 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
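This second pass repeats the measurement, but instead of configuring the target and bdevperf over RPC it replays the JSON captured earlier with save_config (the tgtcfg and bperfcfg variables), feeding it back in through -c. The /dev/fd/62 and /dev/fd/63 paths in the command lines are consistent with bash process substitution; a sketch of that save-and-replay pattern, under that assumption, looks like this:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Capture the running configuration of both processes as JSON (done above with save_config).
  tgtcfg=$("$rpc" save_config)                              # target side, /var/tmp/spdk.sock
  bperfcfg=$("$rpc" -s /var/tmp/bdevperf.sock save_config)  # initiator side

  # Restart each process from its saved configuration; <(echo ...) appears as /dev/fd/NN.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &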
00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:23.966 "subsystems": [ 00:21:23.966 { 00:21:23.966 "subsystem": "keyring", 00:21:23.966 "config": [ 00:21:23.966 { 00:21:23.966 "method": "keyring_file_add_key", 00:21:23.966 "params": { 00:21:23.966 "name": "key0", 00:21:23.966 "path": "/tmp/tmp.tde0PGHzXH" 00:21:23.966 } 00:21:23.966 } 00:21:23.966 ] 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "subsystem": "iobuf", 00:21:23.966 "config": [ 00:21:23.966 { 00:21:23.966 "method": "iobuf_set_options", 00:21:23.966 "params": { 00:21:23.966 "small_pool_count": 8192, 00:21:23.966 "large_pool_count": 1024, 00:21:23.966 "small_bufsize": 8192, 00:21:23.966 "large_bufsize": 135168 00:21:23.966 } 00:21:23.966 } 00:21:23.966 ] 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "subsystem": "sock", 00:21:23.966 "config": [ 00:21:23.966 { 00:21:23.966 "method": "sock_impl_set_options", 00:21:23.966 "params": { 00:21:23.966 "impl_name": "uring", 00:21:23.966 "recv_buf_size": 2097152, 00:21:23.966 "send_buf_size": 2097152, 00:21:23.966 "enable_recv_pipe": true, 00:21:23.966 "enable_quickack": false, 00:21:23.966 "enable_placement_id": 0, 00:21:23.966 "enable_zerocopy_send_server": false, 00:21:23.966 "enable_zerocopy_send_client": false, 00:21:23.966 "zerocopy_threshold": 0, 00:21:23.966 "tls_version": 0, 00:21:23.966 "enable_ktls": false 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "sock_impl_set_options", 00:21:23.966 "params": { 00:21:23.966 "impl_name": "posix", 00:21:23.966 "recv_buf_size": 2097152, 00:21:23.966 "send_buf_size": 2097152, 00:21:23.966 "enable_recv_pipe": true, 00:21:23.966 "enable_quickack": false, 00:21:23.966 "enable_placement_id": 0, 00:21:23.966 "enable_zerocopy_send_server": true, 00:21:23.966 "enable_zerocopy_send_client": false, 00:21:23.966 "zerocopy_threshold": 0, 00:21:23.966 "tls_version": 0, 00:21:23.966 "enable_ktls": false 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "sock_impl_set_options", 00:21:23.966 "params": { 00:21:23.966 "impl_name": "ssl", 00:21:23.966 "recv_buf_size": 4096, 00:21:23.966 "send_buf_size": 4096, 00:21:23.966 "enable_recv_pipe": true, 00:21:23.966 "enable_quickack": false, 00:21:23.966 "enable_placement_id": 0, 00:21:23.966 "enable_zerocopy_send_server": true, 00:21:23.966 "enable_zerocopy_send_client": false, 00:21:23.966 "zerocopy_threshold": 0, 00:21:23.966 "tls_version": 0, 00:21:23.966 "enable_ktls": false 00:21:23.966 } 00:21:23.966 } 00:21:23.966 ] 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "subsystem": "vmd", 00:21:23.966 "config": [] 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "subsystem": "accel", 00:21:23.966 "config": [ 00:21:23.966 { 00:21:23.966 "method": "accel_set_options", 00:21:23.966 "params": { 00:21:23.966 "small_cache_size": 128, 00:21:23.966 "large_cache_size": 16, 00:21:23.966 "task_count": 2048, 00:21:23.966 "sequence_count": 2048, 00:21:23.966 "buf_count": 2048 00:21:23.966 } 00:21:23.966 } 00:21:23.966 ] 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "subsystem": "bdev", 00:21:23.966 "config": [ 00:21:23.966 { 00:21:23.966 "method": "bdev_set_options", 00:21:23.966 "params": { 00:21:23.966 "bdev_io_pool_size": 65535, 00:21:23.966 "bdev_io_cache_size": 256, 00:21:23.966 "bdev_auto_examine": true, 00:21:23.966 "iobuf_small_cache_size": 128, 00:21:23.966 "iobuf_large_cache_size": 16 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "bdev_raid_set_options", 00:21:23.966 "params": { 00:21:23.966 "process_window_size_kb": 1024 00:21:23.966 } 
00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "bdev_iscsi_set_options", 00:21:23.966 "params": { 00:21:23.966 "timeout_sec": 30 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "bdev_nvme_set_options", 00:21:23.966 "params": { 00:21:23.966 "action_on_timeout": "none", 00:21:23.966 "timeout_us": 0, 00:21:23.966 "timeout_admin_us": 0, 00:21:23.966 "keep_alive_timeout_ms": 10000, 00:21:23.966 "arbitration_burst": 0, 00:21:23.966 "low_priority_weight": 0, 00:21:23.966 "medium_priority_weight": 0, 00:21:23.966 "high_priority_weight": 0, 00:21:23.966 "nvme_adminq_poll_period_us": 10000, 00:21:23.966 "nvme_ioq_poll_period_us": 0, 00:21:23.966 "io_queue_requests": 512, 00:21:23.966 "delay_cmd_submit": true, 00:21:23.966 "transport_retry_count": 4, 00:21:23.966 "bdev_retry_count": 3, 00:21:23.966 "transport_ack_timeout": 0, 00:21:23.966 "ctrlr_loss_timeout_sec": 0, 00:21:23.966 "reconnect_delay_sec": 0, 00:21:23.966 "fast_io_fail_timeout_sec": 0, 00:21:23.966 "disable_auto_failback": false, 00:21:23.966 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:23.966 "generate_uuids": false, 00:21:23.966 "transport_tos": 0, 00:21:23.966 "nvme_error_stat": false, 00:21:23.966 "rdma_srq_size": 0, 00:21:23.966 "io_path_stat": false, 00:21:23.966 "allow_accel_sequence": false, 00:21:23.966 "rdma_max_cq_size": 0, 00:21:23.966 "rdma_cm_event_timeout_ms": 0, 00:21:23.966 "dhchap_digests": [ 00:21:23.966 "sha256", 00:21:23.966 "sha384", 00:21:23.966 "sha512" 00:21:23.966 ], 00:21:23.966 "dhchap_dhgroups": [ 00:21:23.966 "null", 00:21:23.966 "ffdhe2048", 00:21:23.966 "ffdhe3072", 00:21:23.966 "ffdhe4096", 00:21:23.966 "ffdhe6144", 00:21:23.966 "ffdhe8192" 00:21:23.966 ] 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "bdev_nvme_attach_controller", 00:21:23.966 "params": { 00:21:23.966 "name": "nvme0", 00:21:23.966 "trtype": "TCP", 00:21:23.966 "adrfam": "IPv4", 00:21:23.966 "traddr": "10.0.0.2", 00:21:23.966 "trsvcid": "4420", 00:21:23.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.966 "prchk_reftag": false, 00:21:23.966 "prchk_guard": false, 00:21:23.966 "ctrlr_loss_timeout_sec": 0, 00:21:23.966 "reconnect_delay_sec": 0, 00:21:23.966 "fast_io_fail_timeout_sec": 0, 00:21:23.966 "psk": "key0", 00:21:23.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.966 "hdgst": false, 00:21:23.966 "ddgst": false 00:21:23.966 } 00:21:23.966 }, 00:21:23.966 { 00:21:23.966 "method": "bdev_nvme_set_hotplug", 00:21:23.966 "params": { 00:21:23.966 "period_us": 100000, 00:21:23.966 "enable": false 00:21:23.966 } 00:21:23.966 }, 00:21:23.967 { 00:21:23.967 "method": "bdev_enable_histogram", 00:21:23.967 "params": { 00:21:23.967 "name": "nvme0n1", 00:21:23.967 "enable": true 00:21:23.967 } 00:21:23.967 }, 00:21:23.967 { 00:21:23.967 "method": "bdev_wait_for_examine" 00:21:23.967 } 00:21:23.967 ] 00:21:23.967 }, 00:21:23.967 { 00:21:23.967 "subsystem": "nbd", 00:21:23.967 "config": [] 00:21:23.967 } 00:21:23.967 ] 00:21:23.967 }' 00:21:23.967 13:39:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.967 [2024-05-15 13:39:36.964042] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:21:23.967 [2024-05-15 13:39:36.964644] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:21:24.224 [2024-05-15 13:39:37.095927] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:24.224 [2024-05-15 13:39:37.107529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.224 [2024-05-15 13:39:37.162232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.224 [2024-05-15 13:39:37.319616] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.159 13:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:25.159 13:39:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:21:25.159 13:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:25.159 13:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:25.725 13:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.725 13:39:38 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.725 Running I/O for 1 seconds... 00:21:26.659 00:21:26.659 Latency(us) 00:21:26.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.659 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:26.659 Verification LBA range: start 0x0 length 0x2000 00:21:26.659 nvme0n1 : 1.01 4717.56 18.43 0.00 0.00 26899.43 5398.92 20971.52 00:21:26.659 =================================================================================================================== 00:21:26.659 Total : 4717.56 18.43 0.00 0.00 26899.43 5398.92 20971.52 00:21:26.659 0 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:26.659 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:26.957 nvmf_trace.0 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 86831 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86831 ']' 00:21:26.957 13:39:39 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86831 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86831 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86831' 00:21:26.957 killing process with pid 86831 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86831 00:21:26.957 Received shutdown signal, test time was about 1.000000 seconds 00:21:26.957 00:21:26.957 Latency(us) 00:21:26.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.957 =================================================================================================================== 00:21:26.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.957 13:39:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 86831 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.215 rmmod nvme_tcp 00:21:27.215 rmmod nvme_fabrics 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:27.215 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 86798 ']' 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 86798 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 86798 ']' 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 86798 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86798 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86798' 00:21:27.216 killing process with pid 86798 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 86798 00:21:27.216 [2024-05-15 13:39:40.188083] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:27.216 13:39:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 86798 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.C9S4ENbAmd /tmp/tmp.i76rW5MDQJ /tmp/tmp.tde0PGHzXH 00:21:27.473 ************************************ 00:21:27.473 END TEST nvmf_tls 00:21:27.473 ************************************ 00:21:27.473 00:21:27.473 real 1m22.507s 00:21:27.473 user 2m14.501s 00:21:27.473 sys 0m26.032s 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:27.473 13:39:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.473 13:39:40 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.473 13:39:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:27.473 13:39:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:27.473 13:39:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:27.473 ************************************ 00:21:27.473 START TEST nvmf_fips 00:21:27.473 ************************************ 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:27.473 * Looking for test storage... 
00:21:27.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.473 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:27.732 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:27.733 Error setting digest 00:21:27.733 00C2D7CC7B7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:27.733 00C2D7CC7B7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:27.733 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:27.991 Cannot find device "nvmf_tgt_br" 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:27.991 Cannot find device "nvmf_tgt_br2" 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:27.991 Cannot find device "nvmf_tgt_br" 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:27.991 Cannot find device "nvmf_tgt_br2" 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:27.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:27.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:27.991 13:39:40 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:27.991 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:28.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:21:28.250 00:21:28.250 --- 10.0.0.2 ping statistics --- 00:21:28.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.250 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:28.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:28.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:21:28.250 00:21:28.250 --- 10.0.0.3 ping statistics --- 00:21:28.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.250 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:28.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:28.250 00:21:28.250 --- 10.0.0.1 ping statistics --- 00:21:28.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.250 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=87102 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 87102 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 87102 ']' 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.250 13:39:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:28.250 [2024-05-15 13:39:41.292577] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:28.250 [2024-05-15 13:39:41.292944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.508 [2024-05-15 13:39:41.418554] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:21:28.508 [2024-05-15 13:39:41.437700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.508 [2024-05-15 13:39:41.490747] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.508 [2024-05-15 13:39:41.491033] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.508 [2024-05-15 13:39:41.491216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.508 [2024-05-15 13:39:41.491416] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.508 [2024-05-15 13:39:41.491460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.508 [2024-05-15 13:39:41.491529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:29.442 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:29.442 [2024-05-15 13:39:42.501458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.442 [2024-05-15 13:39:42.517389] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:29.442 [2024-05-15 13:39:42.517763] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.442 [2024-05-15 13:39:42.518056] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.700 [2024-05-15 13:39:42.547129] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:29.700 malloc0 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=87136 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 
87136 /var/tmp/bdevperf.sock 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 87136 ']' 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:29.700 13:39:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:29.700 [2024-05-15 13:39:42.647455] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:29.700 [2024-05-15 13:39:42.647785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87136 ] 00:21:29.700 [2024-05-15 13:39:42.772403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:29.700 [2024-05-15 13:39:42.790144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.957 [2024-05-15 13:39:42.869586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.524 13:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:30.524 13:39:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:30.524 13:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:30.782 [2024-05-15 13:39:43.715360] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.782 [2024-05-15 13:39:43.716252] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:30.782 TLSTESTn1 00:21:30.782 13:39:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:31.040 Running I/O for 10 seconds... 
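The attach and run above are the heart of the TLS data-path check: bdevperf is started idle (-z) on its own RPC socket, a controller is connected to the target's 10.0.0.2:4420 listener using the PSK file written by setup_nvmf_tgt_conf, and perform_tests then drives the 4 KiB verify workload for 10 seconds. A minimal sketch of the same three steps, assuming it is run from the spdk checkout shown in the paths above:

# start bdevperf idle on its own RPC socket (arguments copied from the log)
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach an NVMe/TCP controller over TLS, reusing the PSK the target was configured with
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/fips/key.txt
# run the queued jobs; the bdev created by the attach shows up below as TLSTESTn1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests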
00:21:41.011 00:21:41.011 Latency(us) 00:21:41.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.011 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:41.011 Verification LBA range: start 0x0 length 0x2000 00:21:41.011 TLSTESTn1 : 10.01 4402.35 17.20 0.00 0.00 29028.55 2808.69 31457.28 00:21:41.011 =================================================================================================================== 00:21:41.011 Total : 4402.35 17.20 0.00 0.00 29028.55 2808.69 31457.28 00:21:41.011 0 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:41.011 13:39:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:41.011 nvmf_trace.0 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 87136 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 87136 ']' 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 87136 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87136 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87136' 00:21:41.011 killing process with pid 87136 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 87136 00:21:41.011 Received shutdown signal, test time was about 10.000000 seconds 00:21:41.011 00:21:41.011 Latency(us) 00:21:41.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.011 =================================================================================================================== 00:21:41.011 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.011 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 87136 00:21:41.011 [2024-05-15 13:39:54.058059] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
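For reference, the table above works out to roughly 4.4k IOPS of 4 KiB verify I/O (about 17 MiB/s) over the TLS connection, with an average latency near 29 ms at queue depth 128; the latency columns are in microseconds, and the exact numbers will vary per host. The cleanup that starts here first archives the target's trace shared memory before any process is killed, which is what keeps the run debuggable after the fact. The single command behind that step, with the paths as they appear in the log:

# shm id 0 was passed to nvmf_tgt as -i 0, so its trace file is /dev/shm/nvmf_trace.0
tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0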
00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.268 rmmod nvme_tcp 00:21:41.268 rmmod nvme_fabrics 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 87102 ']' 00:21:41.268 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 87102 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 87102 ']' 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 87102 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87102 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87102' 00:21:41.269 killing process with pid 87102 00:21:41.269 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 87102 00:21:41.269 [2024-05-15 13:39:54.349566] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 87102 00:21:41.269 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:41.269 [2024-05-15 13:39:54.349851] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:21:41.526 ************************************ 00:21:41.526 END TEST nvmf_fips 00:21:41.526 ************************************ 00:21:41.526 00:21:41.526 real 0m14.103s 00:21:41.526 user 0m20.036s 00:21:41.526 sys 0m5.241s 
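With nvmf_fips finished, it is worth spelling out the gate it applied before sending any traffic: OpenSSL must be at least 3.0.0, the distro's fips.so module must exist under the directory reported by openssl info -modulesdir, the generated spdk_fips.conf must expose both the base and the fips providers, and a non-approved digest has to be rejected, which is exactly the "Error setting digest" failure captured earlier. A condensed sketch of those checks, assuming spdk_fips.conf has already been written the way fips.sh builds it (the md5 probe here is a simplified stand-in for the script's NOT wrapper):

# version gate: 3.0.9 >= 3.0.0 in this run
openssl version | awk '{print $2}'
# the FIPS provider module shipped with the system OpenSSL
ls "$(openssl info -modulesdir)/fips.so"
# with the test config active, both providers must be listed by name
OPENSSL_CONF=spdk_fips.conf openssl list -providers | grep name
# and MD5 must fail, proving the fips provider is actually enforcing the policy
OPENSSL_CONF=spdk_fips.conf openssl md5 <(echo test) && echo 'unexpected: md5 succeeded'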
00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:41.526 13:39:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:41.784 13:39:54 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:21:41.784 13:39:54 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:41.784 13:39:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:41.784 13:39:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:41.784 13:39:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.784 ************************************ 00:21:41.784 START TEST nvmf_fuzz 00:21:41.784 ************************************ 00:21:41.784 13:39:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:41.784 * Looking for test storage... 00:21:41.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:41.784 13:39:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:41.784 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.785 13:39:54 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:41.785 Cannot find device "nvmf_tgt_br" 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:41.785 Cannot find device "nvmf_tgt_br2" 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:41.785 Cannot find device "nvmf_tgt_br" 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:41.785 Cannot find device "nvmf_tgt_br2" 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:41.785 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.043 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:42.043 13:39:54 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:42.043 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:42.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:42.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:21:42.044 00:21:42.044 --- 10.0.0.2 ping statistics --- 00:21:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.044 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:42.044 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:42.044 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:21:42.044 00:21:42.044 --- 10.0.0.3 ping statistics --- 00:21:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.044 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:42.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:42.044 00:21:42.044 --- 10.0.0.1 ping statistics --- 00:21:42.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.044 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=87456 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 87456 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 87456 ']' 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
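The fuzz target is brought up the same way the FIPS target was, just pinned to core 0 (-m 0x1) instead of core 1, and the harness then blocks until the application's RPC socket answers before any configuration is issued. A rough equivalent of that start-and-wait step, using the paths from this log (the polling loop is a simplification of waitforlisten, not its exact implementation):

# run the target inside the test namespace so its TCP listener lives on the veth side
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until /var/tmp/spdk.sock accepts RPCs before calling rpc.py for real
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done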
00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:42.044 13:39:55 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 Malloc0 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:43.418 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:43.676 Shutting down the fuzz application 00:21:43.676 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:43.934 Shutting down the fuzz application 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.934 13:39:56 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.934 rmmod nvme_tcp 00:21:43.934 rmmod nvme_fabrics 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 87456 ']' 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 87456 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 87456 ']' 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 87456 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:43.934 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87456 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87456' 00:21:44.192 killing process with pid 87456 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 87456 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 87456 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.192 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.451 13:39:57 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:44.451 13:39:57 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:21:44.451 00:21:44.451 real 0m2.665s 00:21:44.451 user 0m2.735s 00:21:44.451 sys 0m0.696s 00:21:44.451 13:39:57 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:21:44.451 13:39:57 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:44.451 ************************************ 00:21:44.451 END TEST nvmf_fuzz 00:21:44.451 ************************************ 00:21:44.451 13:39:57 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:44.451 13:39:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:44.451 13:39:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:44.451 13:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:44.451 ************************************ 00:21:44.451 START TEST nvmf_multiconnection 00:21:44.451 ************************************ 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:44.451 * Looking for test storage... 00:21:44.451 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 
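The trace above shows nvmf/common.sh establishing the initiator identity with nvme gen-hostnqn and fixing the Malloc bdev geometry reused by the multiconnection test. Roughly, in bash (a condensed restatement of what the trace shows, not the verbatim helper; the parameter expansion used to strip the bare UUID out of the NQN is an assumption):
  NVME_HOSTNQN=$(nvme gen-hostnqn)                  # e.g. nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}               # bare UUID portion, assumed derivation
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  MALLOC_BDEV_SIZE=64                               # MiB per Malloc bdev created later
  MALLOC_BLOCK_SIZE=512                             # block size passed to bdev_malloc_create
These two variables and the NVME_HOST arguments reappear in every nvme connect call further down.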
00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.451 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:44.452 Cannot find device "nvmf_tgt_br" 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:44.452 Cannot find device "nvmf_tgt_br2" 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:44.452 Cannot find device "nvmf_tgt_br" 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:44.452 Cannot find device "nvmf_tgt_br2" 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:21:44.452 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.711 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:44.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:44.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:21:44.969 00:21:44.969 --- 10.0.0.2 ping statistics --- 00:21:44.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.969 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:44.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:44.969 00:21:44.969 --- 10.0.0.3 ping statistics --- 00:21:44.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.969 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:44.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:44.969 00:21:44.969 --- 10.0.0.1 ping statistics --- 00:21:44.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.969 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=87646 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 87646 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 87646 ']' 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.969 13:39:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:44.969 [2024-05-15 13:39:57.902280] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:44.969 [2024-05-15 13:39:57.903111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.969 [2024-05-15 13:39:58.047588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
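The block above is nvmf_veth_init tearing down any stale devices, building a veth/bridge topology, verifying it with the three pings, and then launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (pid 87646). A condensed bash sketch of that topology, with the teardown steps omitted, would be roughly:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # nvmf_tgt is then started inside the namespace, as the trace shows:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
The pings confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host side and 10.0.0.1 from inside the namespace before the target is started.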
00:21:44.969 [2024-05-15 13:39:58.063637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.227 [2024-05-15 13:39:58.143408] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.227 [2024-05-15 13:39:58.143734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.227 [2024-05-15 13:39:58.144001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.227 [2024-05-15 13:39:58.144162] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.227 [2024-05-15 13:39:58.144300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.227 [2024-05-15 13:39:58.144454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.227 [2024-05-15 13:39:58.144561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.227 [2024-05-15 13:39:58.145285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.227 [2024-05-15 13:39:58.145302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.160 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:46.160 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:21:46.160 13:39:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.160 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 [2024-05-15 13:39:59.062902] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 Malloc1 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 [2024-05-15 13:39:59.135038] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:46.161 [2024-05-15 13:39:59.135704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 Malloc2 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:46.161 
13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 Malloc3 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.161 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc4 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc5 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc6 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc7 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc8 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 Malloc9 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.420 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 Malloc10 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
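The repeated rpc_cmd and nvme connect invocations above (continuing through cnode11 below) amount to two loops over seq 1 $NVMF_SUBSYS. A condensed sketch, assuming rpc_cmd forwards to scripts/rpc.py and reusing the host NQN/ID generated earlier (the real waitforserial helper caps its retries at 15, as the (( i++ <= 15 )) checks show):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: one 64 MiB Malloc bdev, subsystem, namespace and TCP listener per index.
  for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b "Malloc$i"
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  # Initiator side: connect each subsystem, then wait for its serial to appear in lsblk.
  for i in $(seq 1 11); do
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do sleep 2; done
  done
Once all eleven serials (SPDK1 through SPDK11) are visible, the test hands the resulting /dev/nvme*n1 devices to the fio wrapper.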
00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:46.680 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.681 Malloc11 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:46.681 13:39:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:49.209 13:40:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:51.104 13:40:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n 
nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:51.104 13:40:04 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:51.104 13:40:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:51.104 13:40:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:51.104 13:40:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:51.104 13:40:04 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:53.002 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:53.002 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:53.003 13:40:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:53.260 13:40:06 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:53.260 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:53.260 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.260 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:53.260 13:40:06 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.157 13:40:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:55.414 13:40:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:55.414 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:55.414 13:40:08 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:55.414 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:55.414 13:40:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:57.310 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:57.310 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:57.310 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:57.568 13:40:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:21:59.467 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:59.467 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:59.467 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:59.725 13:40:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:02.253 13:40:14 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:02.253 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:22:02.253 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:02.253 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:02.254 13:40:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.152 13:40:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:04.152 13:40:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:04.152 13:40:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:04.152 13:40:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.152 13:40:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:04.152 13:40:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:22:06.128 
13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:06.128 13:40:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:08.656 13:40:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # 
return 0 00:22:10.573 13:40:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:10.573 [global] 00:22:10.573 thread=1 00:22:10.573 invalidate=1 00:22:10.573 rw=read 00:22:10.573 time_based=1 00:22:10.573 runtime=10 00:22:10.573 ioengine=libaio 00:22:10.573 direct=1 00:22:10.573 bs=262144 00:22:10.573 iodepth=64 00:22:10.573 norandommap=1 00:22:10.573 numjobs=1 00:22:10.573 00:22:10.573 [job0] 00:22:10.573 filename=/dev/nvme0n1 00:22:10.573 [job1] 00:22:10.573 filename=/dev/nvme10n1 00:22:10.573 [job2] 00:22:10.573 filename=/dev/nvme1n1 00:22:10.573 [job3] 00:22:10.573 filename=/dev/nvme2n1 00:22:10.573 [job4] 00:22:10.573 filename=/dev/nvme3n1 00:22:10.573 [job5] 00:22:10.573 filename=/dev/nvme4n1 00:22:10.573 [job6] 00:22:10.573 filename=/dev/nvme5n1 00:22:10.573 [job7] 00:22:10.573 filename=/dev/nvme6n1 00:22:10.573 [job8] 00:22:10.573 filename=/dev/nvme7n1 00:22:10.573 [job9] 00:22:10.573 filename=/dev/nvme8n1 00:22:10.573 [job10] 00:22:10.573 filename=/dev/nvme9n1 00:22:10.573 Could not set queue depth (nvme0n1) 00:22:10.573 Could not set queue depth (nvme10n1) 00:22:10.573 Could not set queue depth (nvme1n1) 00:22:10.573 Could not set queue depth (nvme2n1) 00:22:10.573 Could not set queue depth (nvme3n1) 00:22:10.573 Could not set queue depth (nvme4n1) 00:22:10.573 Could not set queue depth (nvme5n1) 00:22:10.573 Could not set queue depth (nvme6n1) 00:22:10.573 Could not set queue depth (nvme7n1) 00:22:10.573 Could not set queue depth (nvme8n1) 00:22:10.573 Could not set queue depth (nvme9n1) 00:22:10.832 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.832 fio-3.35 00:22:10.832 Starting 11 threads 00:22:23.025 00:22:23.025 job0: (groupid=0, jobs=1): err= 0: pid=88104: Wed May 15 13:40:34 2024 00:22:23.025 read: IOPS=645, BW=161MiB/s (169MB/s)(1631MiB/10107msec) 00:22:23.025 slat (usec): min=18, max=28352, avg=1528.37, stdev=3364.51 00:22:23.025 clat (msec): min=31, max=224, avg=97.47, stdev=21.60 00:22:23.025 lat (msec): min=32, max=237, avg=99.00, stdev=21.94 00:22:23.025 clat percentiles (msec): 00:22:23.025 | 1.00th=[ 55], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 88], 00:22:23.025 | 30.00th=[ 91], 
40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:22:23.025 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 131], 95.00th=[ 140], 00:22:23.025 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 215], 99.95th=[ 224], 00:22:23.025 | 99.99th=[ 226] 00:22:23.025 bw ( KiB/s): min=114688, max=243712, per=8.52%, avg=165371.35, stdev=32296.70, samples=20 00:22:23.025 iops : min= 448, max= 952, avg=645.90, stdev=126.21, samples=20 00:22:23.025 lat (msec) : 50=0.43%, 100=69.06%, 250=30.51% 00:22:23.025 cpu : usr=0.31%, sys=2.36%, ctx=1966, majf=0, minf=4097 00:22:23.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:23.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.025 issued rwts: total=6525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.025 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.025 job1: (groupid=0, jobs=1): err= 0: pid=88105: Wed May 15 13:40:34 2024 00:22:23.025 read: IOPS=964, BW=241MiB/s (253MB/s)(2436MiB/10102msec) 00:22:23.025 slat (usec): min=17, max=137173, avg=1021.59, stdev=2927.34 00:22:23.025 clat (msec): min=10, max=241, avg=65.27, stdev=26.68 00:22:23.026 lat (msec): min=10, max=272, avg=66.29, stdev=27.02 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 34], 20.00th=[ 51], 00:22:23.026 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 65], 60.00th=[ 67], 00:22:23.026 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 87], 95.00th=[ 108], 00:22:23.026 | 99.00th=[ 188], 99.50th=[ 197], 99.90th=[ 211], 99.95th=[ 224], 00:22:23.026 | 99.99th=[ 243] 00:22:23.026 bw ( KiB/s): min=107008, max=489005, per=12.75%, avg=247601.50, stdev=91117.13, samples=20 00:22:23.026 iops : min= 418, max= 1910, avg=967.15, stdev=355.87, samples=20 00:22:23.026 lat (msec) : 20=0.21%, 50=19.82%, 100=73.59%, 250=6.38% 00:22:23.026 cpu : usr=0.33%, sys=3.19%, ctx=2388, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=9742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job2: (groupid=0, jobs=1): err= 0: pid=88106: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=957, BW=239MiB/s (251MB/s)(2398MiB/10017msec) 00:22:23.026 slat (usec): min=18, max=17799, avg=1038.03, stdev=2217.61 00:22:23.026 clat (msec): min=12, max=105, avg=65.73, stdev= 8.81 00:22:23.026 lat (msec): min=16, max=113, avg=66.76, stdev= 8.88 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 50], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:22:23.026 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 65], 60.00th=[ 67], 00:22:23.026 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 78], 95.00th=[ 82], 00:22:23.026 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 102], 99.95th=[ 104], 00:22:23.026 | 99.99th=[ 106] 00:22:23.026 bw ( KiB/s): min=195193, max=273408, per=12.56%, avg=243796.65, stdev=18467.59, samples=20 00:22:23.026 iops : min= 762, max= 1068, avg=952.15, stdev=72.08, samples=20 00:22:23.026 lat (msec) : 20=0.11%, 50=1.19%, 100=98.53%, 250=0.17% 00:22:23.026 cpu : usr=0.47%, sys=3.07%, ctx=2605, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:23.026 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=9590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job3: (groupid=0, jobs=1): err= 0: pid=88107: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=832, BW=208MiB/s (218MB/s)(2101MiB/10093msec) 00:22:23.026 slat (usec): min=18, max=64174, avg=1171.15, stdev=2859.96 00:22:23.026 clat (msec): min=19, max=205, avg=75.59, stdev=25.20 00:22:23.026 lat (msec): min=20, max=238, avg=76.76, stdev=25.54 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 61], 00:22:23.026 | 30.00th=[ 64], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 70], 00:22:23.026 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 125], 95.00th=[ 138], 00:22:23.026 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 205], 99.95th=[ 205], 00:22:23.026 | 99.99th=[ 205] 00:22:23.026 bw ( KiB/s): min=112128, max=262144, per=11.01%, avg=213694.40, stdev=53457.49, samples=20 00:22:23.026 iops : min= 438, max= 1024, avg=834.35, stdev=208.75, samples=20 00:22:23.026 lat (msec) : 20=0.01%, 50=1.44%, 100=86.60%, 250=11.95% 00:22:23.026 cpu : usr=0.26%, sys=2.77%, ctx=2249, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=8402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job4: (groupid=0, jobs=1): err= 0: pid=88108: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=643, BW=161MiB/s (169MB/s)(1626MiB/10101msec) 00:22:23.026 slat (usec): min=18, max=32749, avg=1534.92, stdev=3406.68 00:22:23.026 clat (msec): min=41, max=243, avg=97.78, stdev=23.00 00:22:23.026 lat (msec): min=41, max=243, avg=99.32, stdev=23.32 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 53], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 88], 00:22:23.026 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 99], 00:22:23.026 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 130], 95.00th=[ 140], 00:22:23.026 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 236], 99.95th=[ 236], 00:22:23.026 | 99.99th=[ 245] 00:22:23.026 bw ( KiB/s): min=102912, max=243712, per=8.49%, avg=164832.15, stdev=33891.52, samples=20 00:22:23.026 iops : min= 402, max= 952, avg=643.75, stdev=132.33, samples=20 00:22:23.026 lat (msec) : 50=0.71%, 100=66.09%, 250=33.20% 00:22:23.026 cpu : usr=0.23%, sys=2.44%, ctx=1672, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=6503,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job5: (groupid=0, jobs=1): err= 0: pid=88115: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=498, BW=125MiB/s (131MB/s)(1258MiB/10090msec) 00:22:23.026 slat (usec): min=18, max=65256, avg=1981.18, stdev=4833.58 00:22:23.026 clat (msec): min=12, max=201, avg=126.10, stdev=13.93 00:22:23.026 lat (msec): min=13, max=201, avg=128.09, stdev=14.49 00:22:23.026 
clat percentiles (msec): 00:22:23.026 | 1.00th=[ 73], 5.00th=[ 110], 10.00th=[ 117], 20.00th=[ 121], 00:22:23.026 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 129], 00:22:23.026 | 70.00th=[ 131], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 142], 00:22:23.026 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 199], 99.95th=[ 201], 00:22:23.026 | 99.99th=[ 201] 00:22:23.026 bw ( KiB/s): min=116224, max=141082, per=6.55%, avg=127220.05, stdev=5645.56, samples=20 00:22:23.026 iops : min= 454, max= 551, avg=496.90, stdev=21.97, samples=20 00:22:23.026 lat (msec) : 20=0.24%, 50=0.44%, 100=1.81%, 250=97.52% 00:22:23.026 cpu : usr=0.25%, sys=1.76%, ctx=1692, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=5033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job6: (groupid=0, jobs=1): err= 0: pid=88116: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=497, BW=124MiB/s (131MB/s)(1255MiB/10079msec) 00:22:23.026 slat (usec): min=18, max=55033, avg=1991.17, stdev=4578.27 00:22:23.026 clat (msec): min=16, max=187, avg=126.39, stdev=12.93 00:22:23.026 lat (msec): min=16, max=187, avg=128.38, stdev=13.28 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 97], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 121], 00:22:23.026 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 129], 00:22:23.026 | 70.00th=[ 131], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 144], 00:22:23.026 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 188], 99.95th=[ 188], 00:22:23.026 | 99.99th=[ 188] 00:22:23.026 bw ( KiB/s): min=117760, max=134144, per=6.54%, avg=126949.05, stdev=4216.39, samples=20 00:22:23.026 iops : min= 460, max= 524, avg=495.55, stdev=16.61, samples=20 00:22:23.026 lat (msec) : 20=0.40%, 50=0.08%, 100=1.53%, 250=97.99% 00:22:23.026 cpu : usr=0.24%, sys=1.83%, ctx=1381, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=5018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job7: (groupid=0, jobs=1): err= 0: pid=88117: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=496, BW=124MiB/s (130MB/s)(1250MiB/10077msec) 00:22:23.026 slat (usec): min=18, max=66199, avg=1996.13, stdev=4574.53 00:22:23.026 clat (msec): min=23, max=212, avg=126.76, stdev=11.62 00:22:23.026 lat (msec): min=28, max=212, avg=128.76, stdev=12.00 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 101], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 121], 00:22:23.026 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 129], 00:22:23.026 | 70.00th=[ 131], 80.00th=[ 134], 90.00th=[ 138], 95.00th=[ 142], 00:22:23.026 | 99.00th=[ 153], 99.50th=[ 169], 99.90th=[ 199], 99.95th=[ 199], 00:22:23.026 | 99.99th=[ 213] 00:22:23.026 bw ( KiB/s): min=114176, max=132854, per=6.51%, avg=126323.05, stdev=4624.23, samples=20 00:22:23.026 iops : min= 446, max= 518, avg=493.35, stdev=17.94, samples=20 00:22:23.026 lat (msec) : 50=0.44%, 100=0.52%, 250=99.04% 00:22:23.026 cpu : usr=0.15%, 
sys=1.86%, ctx=1467, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=4999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job8: (groupid=0, jobs=1): err= 0: pid=88118: Wed May 15 13:40:34 2024 00:22:23.026 read: IOPS=935, BW=234MiB/s (245MB/s)(2343MiB/10015msec) 00:22:23.026 slat (usec): min=18, max=38472, avg=1053.64, stdev=2320.07 00:22:23.026 clat (msec): min=12, max=111, avg=67.19, stdev=10.60 00:22:23.026 lat (msec): min=18, max=115, avg=68.25, stdev=10.70 00:22:23.026 clat percentiles (msec): 00:22:23.026 | 1.00th=[ 49], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 59], 00:22:23.026 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 68], 00:22:23.026 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 82], 95.00th=[ 89], 00:22:23.026 | 99.00th=[ 99], 99.50th=[ 102], 99.90th=[ 107], 99.95th=[ 111], 00:22:23.026 | 99.99th=[ 112] 00:22:23.026 bw ( KiB/s): min=169811, max=271360, per=12.27%, avg=238285.95, stdev=27010.83, samples=20 00:22:23.026 iops : min= 663, max= 1060, avg=930.75, stdev=105.61, samples=20 00:22:23.026 lat (msec) : 20=0.03%, 50=1.21%, 100=98.00%, 250=0.76% 00:22:23.026 cpu : usr=0.37%, sys=3.09%, ctx=2409, majf=0, minf=4097 00:22:23.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:23.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.026 issued rwts: total=9372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.026 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.026 job9: (groupid=0, jobs=1): err= 0: pid=88119: Wed May 15 13:40:34 2024 00:22:23.027 read: IOPS=491, BW=123MiB/s (129MB/s)(1237MiB/10063msec) 00:22:23.027 slat (usec): min=18, max=88331, avg=2016.48, stdev=4818.97 00:22:23.027 clat (msec): min=60, max=200, avg=128.07, stdev=10.42 00:22:23.027 lat (msec): min=91, max=200, avg=130.08, stdev=10.78 00:22:23.027 clat percentiles (msec): 00:22:23.027 | 1.00th=[ 104], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:22:23.027 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:22:23.027 | 70.00th=[ 132], 80.00th=[ 136], 90.00th=[ 140], 95.00th=[ 146], 00:22:23.027 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 199], 99.95th=[ 199], 00:22:23.027 | 99.99th=[ 201] 00:22:23.027 bw ( KiB/s): min=107008, max=135920, per=6.44%, avg=125042.40, stdev=6610.14, samples=20 00:22:23.027 iops : min= 418, max= 530, avg=488.40, stdev=25.74, samples=20 00:22:23.027 lat (msec) : 100=0.49%, 250=99.51% 00:22:23.027 cpu : usr=0.20%, sys=1.65%, ctx=1307, majf=0, minf=4097 00:22:23.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:23.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.027 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.027 job10: (groupid=0, jobs=1): err= 0: pid=88120: Wed May 15 13:40:34 2024 00:22:23.027 read: IOPS=647, BW=162MiB/s (170MB/s)(1637MiB/10110msec) 00:22:23.027 slat (usec): min=18, max=34655, avg=1518.45, 
stdev=3348.75 00:22:23.027 clat (msec): min=9, max=226, avg=97.18, stdev=21.81 00:22:23.027 lat (msec): min=9, max=226, avg=98.70, stdev=22.16 00:22:23.027 clat percentiles (msec): 00:22:23.027 | 1.00th=[ 46], 5.00th=[ 61], 10.00th=[ 68], 20.00th=[ 88], 00:22:23.027 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 97], 00:22:23.027 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 129], 95.00th=[ 136], 00:22:23.027 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 226], 99.95th=[ 226], 00:22:23.027 | 99.99th=[ 228] 00:22:23.027 bw ( KiB/s): min=109056, max=247808, per=8.55%, avg=165960.20, stdev=32365.47, samples=20 00:22:23.027 iops : min= 426, max= 968, avg=648.20, stdev=126.48, samples=20 00:22:23.027 lat (msec) : 10=0.02%, 20=0.05%, 50=1.56%, 100=66.92%, 250=31.46% 00:22:23.027 cpu : usr=0.29%, sys=2.27%, ctx=1714, majf=0, minf=4097 00:22:23.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:23.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:23.027 issued rwts: total=6548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.027 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:23.027 00:22:23.027 Run status group 0 (all jobs): 00:22:23.027 READ: bw=1896MiB/s (1988MB/s), 123MiB/s-241MiB/s (129MB/s-253MB/s), io=18.7GiB (20.1GB), run=10015-10110msec 00:22:23.027 00:22:23.027 Disk stats (read/write): 00:22:23.027 nvme0n1: ios=12931/0, merge=0/0, ticks=1228357/0, in_queue=1228357, util=97.71% 00:22:23.027 nvme10n1: ios=19374/0, merge=0/0, ticks=1230731/0, in_queue=1230731, util=97.90% 00:22:23.027 nvme1n1: ios=19087/0, merge=0/0, ticks=1237579/0, in_queue=1237579, util=98.05% 00:22:23.027 nvme2n1: ios=16678/0, merge=0/0, ticks=1226617/0, in_queue=1226617, util=98.03% 00:22:23.027 nvme3n1: ios=12889/0, merge=0/0, ticks=1227198/0, in_queue=1227198, util=98.27% 00:22:23.027 nvme4n1: ios=9953/0, merge=0/0, ticks=1227868/0, in_queue=1227868, util=98.49% 00:22:23.027 nvme5n1: ios=9910/0, merge=0/0, ticks=1223772/0, in_queue=1223772, util=98.48% 00:22:23.027 nvme6n1: ios=9877/0, merge=0/0, ticks=1224331/0, in_queue=1224331, util=98.46% 00:22:23.027 nvme7n1: ios=18688/0, merge=0/0, ticks=1237267/0, in_queue=1237267, util=98.82% 00:22:23.027 nvme8n1: ios=9775/0, merge=0/0, ticks=1223770/0, in_queue=1223770, util=98.81% 00:22:23.027 nvme9n1: ios=12972/0, merge=0/0, ticks=1228620/0, in_queue=1228620, util=99.08% 00:22:23.027 13:40:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:23.027 [global] 00:22:23.027 thread=1 00:22:23.027 invalidate=1 00:22:23.027 rw=randwrite 00:22:23.027 time_based=1 00:22:23.027 runtime=10 00:22:23.027 ioengine=libaio 00:22:23.027 direct=1 00:22:23.027 bs=262144 00:22:23.027 iodepth=64 00:22:23.027 norandommap=1 00:22:23.027 numjobs=1 00:22:23.027 00:22:23.027 [job0] 00:22:23.027 filename=/dev/nvme0n1 00:22:23.027 [job1] 00:22:23.027 filename=/dev/nvme10n1 00:22:23.027 [job2] 00:22:23.027 filename=/dev/nvme1n1 00:22:23.027 [job3] 00:22:23.027 filename=/dev/nvme2n1 00:22:23.027 [job4] 00:22:23.027 filename=/dev/nvme3n1 00:22:23.027 [job5] 00:22:23.027 filename=/dev/nvme4n1 00:22:23.027 [job6] 00:22:23.027 filename=/dev/nvme5n1 00:22:23.027 [job7] 00:22:23.027 filename=/dev/nvme6n1 00:22:23.027 [job8] 00:22:23.027 filename=/dev/nvme7n1 00:22:23.027 [job9] 00:22:23.027 filename=/dev/nvme8n1 00:22:23.027 
[job10] 00:22:23.027 filename=/dev/nvme9n1 00:22:23.027 Could not set queue depth (nvme0n1) 00:22:23.027 Could not set queue depth (nvme10n1) 00:22:23.027 Could not set queue depth (nvme1n1) 00:22:23.027 Could not set queue depth (nvme2n1) 00:22:23.027 Could not set queue depth (nvme3n1) 00:22:23.027 Could not set queue depth (nvme4n1) 00:22:23.027 Could not set queue depth (nvme5n1) 00:22:23.027 Could not set queue depth (nvme6n1) 00:22:23.027 Could not set queue depth (nvme7n1) 00:22:23.027 Could not set queue depth (nvme8n1) 00:22:23.027 Could not set queue depth (nvme9n1) 00:22:23.027 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:23.027 fio-3.35 00:22:23.027 Starting 11 threads 00:22:32.997 00:22:32.997 job0: (groupid=0, jobs=1): err= 0: pid=88318: Wed May 15 13:40:45 2024 00:22:32.997 write: IOPS=382, BW=95.6MiB/s (100MB/s)(973MiB/10183msec); 0 zone resets 00:22:32.997 slat (usec): min=20, max=53400, avg=2564.63, stdev=4479.62 00:22:32.997 clat (msec): min=55, max=374, avg=164.80, stdev=21.01 00:22:32.997 lat (msec): min=55, max=374, avg=167.37, stdev=20.77 00:22:32.997 clat percentiles (msec): 00:22:32.997 | 1.00th=[ 127], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:22:32.997 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:22:32.997 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 194], 00:22:32.997 | 99.00th=[ 253], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 376], 00:22:32.997 | 99.99th=[ 376] 00:22:32.997 bw ( KiB/s): min=82432, max=102400, per=7.12%, avg=98022.40, stdev=5955.80, samples=20 00:22:32.997 iops : min= 322, max= 400, avg=382.90, stdev=23.26, samples=20 00:22:32.997 lat (msec) : 100=0.62%, 250=98.30%, 500=1.08% 00:22:32.997 cpu : usr=0.85%, sys=1.19%, ctx=6158, majf=0, minf=1 00:22:32.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:32.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.997 issued rwts: total=0,3892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.997 job1: (groupid=0, 
jobs=1): err= 0: pid=88319: Wed May 15 13:40:45 2024 00:22:32.997 write: IOPS=670, BW=168MiB/s (176MB/s)(1698MiB/10124msec); 0 zone resets 00:22:32.997 slat (usec): min=19, max=37155, avg=1447.45, stdev=2637.70 00:22:32.997 clat (usec): min=1978, max=250957, avg=93910.65, stdev=28541.98 00:22:32.997 lat (msec): min=2, max=250, avg=95.36, stdev=28.87 00:22:32.997 clat percentiles (msec): 00:22:32.997 | 1.00th=[ 19], 5.00th=[ 55], 10.00th=[ 57], 20.00th=[ 62], 00:22:32.997 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 101], 00:22:32.997 | 70.00th=[ 103], 80.00th=[ 104], 90.00th=[ 122], 95.00th=[ 153], 00:22:32.997 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 234], 99.95th=[ 243], 00:22:32.997 | 99.99th=[ 251] 00:22:32.997 bw ( KiB/s): min=105984, max=289280, per=12.52%, avg=172262.40, stdev=43716.11, samples=20 00:22:32.997 iops : min= 414, max= 1130, avg=672.90, stdev=170.77, samples=20 00:22:32.997 lat (msec) : 2=0.01%, 4=0.03%, 10=0.41%, 20=0.69%, 50=2.09% 00:22:32.997 lat (msec) : 100=57.39%, 250=39.34%, 500=0.03% 00:22:32.997 cpu : usr=1.59%, sys=1.58%, ctx=5608, majf=0, minf=1 00:22:32.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:32.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.997 issued rwts: total=0,6792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.997 job2: (groupid=0, jobs=1): err= 0: pid=88331: Wed May 15 13:40:45 2024 00:22:32.997 write: IOPS=382, BW=95.6MiB/s (100MB/s)(974MiB/10188msec); 0 zone resets 00:22:32.997 slat (usec): min=24, max=58819, avg=2562.66, stdev=4494.90 00:22:32.997 clat (msec): min=10, max=378, avg=164.71, stdev=24.31 00:22:32.997 lat (msec): min=10, max=378, avg=167.28, stdev=24.18 00:22:32.997 clat percentiles (msec): 00:22:32.997 | 1.00th=[ 64], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 157], 00:22:32.997 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:22:32.997 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 194], 00:22:32.997 | 99.00th=[ 257], 99.50th=[ 317], 99.90th=[ 368], 99.95th=[ 380], 00:22:32.997 | 99.99th=[ 380] 00:22:32.997 bw ( KiB/s): min=82432, max=102912, per=7.13%, avg=98124.80, stdev=5599.51, samples=20 00:22:32.997 iops : min= 322, max= 402, avg=383.30, stdev=21.87, samples=20 00:22:32.997 lat (msec) : 20=0.21%, 50=0.51%, 100=0.82%, 250=97.38%, 500=1.08% 00:22:32.997 cpu : usr=1.04%, sys=0.96%, ctx=6172, majf=0, minf=1 00:22:32.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:32.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.997 issued rwts: total=0,3896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.997 job3: (groupid=0, jobs=1): err= 0: pid=88332: Wed May 15 13:40:45 2024 00:22:32.997 write: IOPS=475, BW=119MiB/s (125MB/s)(1202MiB/10110msec); 0 zone resets 00:22:32.997 slat (usec): min=29, max=35537, avg=2074.93, stdev=3569.08 00:22:32.997 clat (msec): min=38, max=230, avg=132.36, stdev=14.68 00:22:32.997 lat (msec): min=38, max=230, avg=134.43, stdev=14.46 00:22:32.997 clat percentiles (msec): 00:22:32.997 | 1.00th=[ 115], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 125], 00:22:32.997 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 
131], 00:22:32.997 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 161], 00:22:32.997 | 99.00th=[ 174], 99.50th=[ 184], 99.90th=[ 224], 99.95th=[ 224], 00:22:32.997 | 99.99th=[ 230] 00:22:32.997 bw ( KiB/s): min=98304, max=131072, per=8.83%, avg=121472.00, stdev=10128.74, samples=20 00:22:32.997 iops : min= 384, max= 512, avg=474.50, stdev=39.57, samples=20 00:22:32.997 lat (msec) : 50=0.17%, 100=0.50%, 250=99.33% 00:22:32.997 cpu : usr=1.11%, sys=1.37%, ctx=10858, majf=0, minf=1 00:22:32.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.997 issued rwts: total=0,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.997 job4: (groupid=0, jobs=1): err= 0: pid=88333: Wed May 15 13:40:45 2024 00:22:32.997 write: IOPS=592, BW=148MiB/s (155MB/s)(1495MiB/10086msec); 0 zone resets 00:22:32.997 slat (usec): min=25, max=17028, avg=1667.26, stdev=2847.28 00:22:32.997 clat (msec): min=15, max=185, avg=106.26, stdev=15.21 00:22:32.997 lat (msec): min=15, max=185, avg=107.93, stdev=15.18 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 95], 00:22:32.998 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:22:32.998 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 128], 95.00th=[ 136], 00:22:32.998 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 180], 99.95th=[ 180], 00:22:32.998 | 99.99th=[ 186] 00:22:32.998 bw ( KiB/s): min=110592, max=176128, per=11.00%, avg=151449.60, stdev=16827.35, samples=20 00:22:32.998 iops : min= 432, max= 688, avg=591.60, stdev=65.73, samples=20 00:22:32.998 lat (msec) : 20=0.07%, 50=0.33%, 100=32.66%, 250=66.93% 00:22:32.998 cpu : usr=1.38%, sys=1.55%, ctx=13676, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,5979,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job5: (groupid=0, jobs=1): err= 0: pid=88337: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=475, BW=119MiB/s (125MB/s)(1203MiB/10111msec); 0 zone resets 00:22:32.998 slat (usec): min=26, max=24104, avg=2071.11, stdev=3560.32 00:22:32.998 clat (msec): min=18, max=234, avg=132.24, stdev=16.20 00:22:32.998 lat (msec): min=18, max=234, avg=134.31, stdev=16.05 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 103], 5.00th=[ 118], 10.00th=[ 120], 20.00th=[ 125], 00:22:32.998 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 131], 00:22:32.998 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 157], 95.00th=[ 161], 00:22:32.998 | 99.00th=[ 174], 99.50th=[ 188], 99.90th=[ 226], 99.95th=[ 228], 00:22:32.998 | 99.99th=[ 234] 00:22:32.998 bw ( KiB/s): min=100352, max=131584, per=8.83%, avg=121523.20, stdev=9696.43, samples=20 00:22:32.998 iops : min= 392, max= 514, avg=474.70, stdev=37.88, samples=20 00:22:32.998 lat (msec) : 20=0.08%, 50=0.42%, 100=0.50%, 250=99.00% 00:22:32.998 cpu : usr=1.16%, sys=1.24%, ctx=10749, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.998 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,4810,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job6: (groupid=0, jobs=1): err= 0: pid=88338: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=592, BW=148MiB/s (155MB/s)(1496MiB/10094msec); 0 zone resets 00:22:32.998 slat (usec): min=27, max=14324, avg=1667.18, stdev=2852.78 00:22:32.998 clat (msec): min=8, max=189, avg=106.27, stdev=15.27 00:22:32.998 lat (msec): min=8, max=189, avg=107.94, stdev=15.24 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 93], 20.00th=[ 95], 00:22:32.998 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 104], 60.00th=[ 106], 00:22:32.998 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 128], 95.00th=[ 136], 00:22:32.998 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 184], 99.95th=[ 184], 00:22:32.998 | 99.99th=[ 190] 00:22:32.998 bw ( KiB/s): min=110592, max=176128, per=11.01%, avg=151565.00, stdev=16549.73, samples=20 00:22:32.998 iops : min= 432, max= 688, avg=592.05, stdev=64.65, samples=20 00:22:32.998 lat (msec) : 10=0.03%, 20=0.07%, 50=0.33%, 100=32.32%, 250=67.24% 00:22:32.998 cpu : usr=1.29%, sys=1.62%, ctx=13752, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,5983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job7: (groupid=0, jobs=1): err= 0: pid=88339: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=385, BW=96.3MiB/s (101MB/s)(981MiB/10189msec); 0 zone resets 00:22:32.998 slat (usec): min=25, max=17609, avg=2546.02, stdev=4397.30 00:22:32.998 clat (msec): min=16, max=373, avg=163.55, stdev=23.95 00:22:32.998 lat (msec): min=16, max=373, avg=166.09, stdev=23.84 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 68], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:22:32.998 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:22:32.998 | 70.00th=[ 165], 80.00th=[ 167], 90.00th=[ 184], 95.00th=[ 194], 00:22:32.998 | 99.00th=[ 251], 99.50th=[ 313], 99.90th=[ 363], 99.95th=[ 376], 00:22:32.998 | 99.99th=[ 376] 00:22:32.998 bw ( KiB/s): min=83968, max=112640, per=7.18%, avg=98816.00, stdev=6539.94, samples=20 00:22:32.998 iops : min= 328, max= 440, avg=386.00, stdev=25.55, samples=20 00:22:32.998 lat (msec) : 20=0.10%, 50=0.61%, 100=0.61%, 250=97.60%, 500=1.07% 00:22:32.998 cpu : usr=1.04%, sys=0.84%, ctx=6974, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,3924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job8: (groupid=0, jobs=1): err= 0: pid=88340: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=378, BW=94.7MiB/s (99.3MB/s)(964MiB/10187msec); 0 zone resets 00:22:32.998 slat (usec): min=24, max=80971, avg=2590.28, stdev=4619.98 00:22:32.998 clat (msec): min=83, max=370, avg=166.36, 
stdev=19.63 00:22:32.998 lat (msec): min=83, max=370, avg=168.96, stdev=19.29 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 142], 5.00th=[ 153], 10.00th=[ 153], 20.00th=[ 157], 00:22:32.998 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 163], 60.00th=[ 165], 00:22:32.998 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 194], 00:22:32.998 | 99.00th=[ 249], 99.50th=[ 309], 99.90th=[ 359], 99.95th=[ 372], 00:22:32.998 | 99.99th=[ 372] 00:22:32.998 bw ( KiB/s): min=83968, max=102400, per=7.06%, avg=97109.25, stdev=6378.01, samples=20 00:22:32.998 iops : min= 328, max= 400, avg=379.30, stdev=24.98, samples=20 00:22:32.998 lat (msec) : 100=0.13%, 250=98.89%, 500=0.99% 00:22:32.998 cpu : usr=0.82%, sys=1.06%, ctx=8117, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,3857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job9: (groupid=0, jobs=1): err= 0: pid=88341: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=597, BW=149MiB/s (157MB/s)(1511MiB/10112msec); 0 zone resets 00:22:32.998 slat (usec): min=23, max=30396, avg=1630.26, stdev=2862.48 00:22:32.998 clat (msec): min=32, max=236, avg=105.43, stdev=17.98 00:22:32.998 lat (msec): min=32, max=236, avg=107.06, stdev=18.04 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 68], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 94], 00:22:32.998 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 103], 00:22:32.998 | 70.00th=[ 107], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 138], 00:22:32.998 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 222], 99.95th=[ 230], 00:22:32.998 | 99.99th=[ 236] 00:22:32.998 bw ( KiB/s): min=111104, max=183296, per=11.12%, avg=153088.00, stdev=20481.35, samples=20 00:22:32.998 iops : min= 434, max= 716, avg=598.00, stdev=80.01, samples=20 00:22:32.998 lat (msec) : 50=0.23%, 100=45.11%, 250=54.66% 00:22:32.998 cpu : usr=1.36%, sys=1.47%, ctx=3910, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,6043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 job10: (groupid=0, jobs=1): err= 0: pid=88342: Wed May 15 13:40:45 2024 00:22:32.998 write: IOPS=474, BW=119MiB/s (124MB/s)(1198MiB/10107msec); 0 zone resets 00:22:32.998 slat (usec): min=27, max=77258, avg=2083.06, stdev=3704.37 00:22:32.998 clat (msec): min=79, max=223, avg=132.86, stdev=14.13 00:22:32.998 lat (msec): min=79, max=224, avg=134.94, stdev=13.87 00:22:32.998 clat percentiles (msec): 00:22:32.998 | 1.00th=[ 116], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:22:32.998 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:22:32.998 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 161], 00:22:32.998 | 99.00th=[ 174], 99.50th=[ 197], 99.90th=[ 218], 99.95th=[ 218], 00:22:32.998 | 99.99th=[ 224] 00:22:32.998 bw ( KiB/s): min=87552, max=131072, per=8.80%, avg=121062.40, stdev=11590.57, samples=20 00:22:32.998 iops : min= 342, max= 512, avg=472.90, stdev=45.28, 
samples=20 00:22:32.998 lat (msec) : 100=0.25%, 250=99.75% 00:22:32.998 cpu : usr=1.09%, sys=1.21%, ctx=11203, majf=0, minf=1 00:22:32.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:32.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.998 issued rwts: total=0,4792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.998 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.998 00:22:32.998 Run status group 0 (all jobs): 00:22:32.998 WRITE: bw=1344MiB/s (1409MB/s), 94.7MiB/s-168MiB/s (99.3MB/s-176MB/s), io=13.4GiB (14.4GB), run=10086-10189msec 00:22:32.998 00:22:32.998 Disk stats (read/write): 00:22:32.998 nvme0n1: ios=49/7765, merge=0/0, ticks=46/1231482, in_queue=1231528, util=97.32% 00:22:32.998 nvme10n1: ios=49/13407, merge=0/0, ticks=35/1209371, in_queue=1209406, util=97.62% 00:22:32.998 nvme1n1: ios=42/7786, merge=0/0, ticks=49/1233573, in_queue=1233622, util=97.79% 00:22:32.998 nvme2n1: ios=34/9413, merge=0/0, ticks=58/1206012, in_queue=1206070, util=97.63% 00:22:32.998 nvme3n1: ios=5/11733, merge=0/0, ticks=12/1206830, in_queue=1206842, util=97.53% 00:22:32.998 nvme4n1: ios=0/9429, merge=0/0, ticks=0/1205806, in_queue=1205806, util=97.75% 00:22:32.998 nvme5n1: ios=0/11753, merge=0/0, ticks=0/1208837, in_queue=1208837, util=98.09% 00:22:32.998 nvme6n1: ios=0/7827, merge=0/0, ticks=0/1231844, in_queue=1231844, util=98.11% 00:22:32.998 nvme7n1: ios=0/7690, merge=0/0, ticks=0/1231411, in_queue=1231411, util=98.44% 00:22:32.998 nvme8n1: ios=0/11881, merge=0/0, ticks=0/1206551, in_queue=1206551, util=98.69% 00:22:32.998 nvme9n1: ios=0/9371, merge=0/0, ticks=0/1205582, in_queue=1205582, util=98.82% 00:22:32.998 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:22:32.998 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:22:32.998 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:32.999 13:40:45 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:32.999 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:32.999 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:33.000 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:33.000 13:40:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.000 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:33.258 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 
controller(s) 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.258 rmmod nvme_tcp 00:22:33.258 rmmod nvme_fabrics 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 87646 ']' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 87646 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 87646 ']' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 87646 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87646 00:22:33.258 killing process with pid 87646 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 87646' 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 87646 00:22:33.258 [2024-05-15 13:40:46.285902] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:33.258 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 87646 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:33.827 ************************************ 00:22:33.827 END TEST nvmf_multiconnection 00:22:33.827 ************************************ 00:22:33.827 00:22:33.827 real 0m49.478s 00:22:33.827 user 2m39.935s 00:22:33.827 sys 0m36.446s 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:33.827 13:40:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:33.827 13:40:46 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:33.827 13:40:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:33.827 13:40:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:33.827 13:40:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:33.827 ************************************ 00:22:33.827 START TEST nvmf_initiator_timeout 00:22:33.827 ************************************ 00:22:33.827 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:34.087 * Looking for test storage... 
00:22:34.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.087 13:40:46 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.087 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:34.088 13:40:46 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:34.088 Cannot find device "nvmf_tgt_br" 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:34.088 Cannot find device "nvmf_tgt_br2" 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:34.088 Cannot find device "nvmf_tgt_br" 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:34.088 Cannot find device "nvmf_tgt_br2" 00:22:34.088 13:40:47 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:34.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:34.088 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:34.088 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
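For readers following the nvmf_veth_init trace above: the topology it builds can be reproduced by hand roughly as below. Interface names and the 10.0.0.x addresses are taken from the log itself; this is a condensed sketch of the setup rather than the common.sh helper, it assumes root on a Linux host, and it shows only one of the two target-side veth pairs (the script adds nvmf_tgt_if2 / 10.0.0.3 the same way).

  # target side lives in its own network namespace, reached through a bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # 10.0.0.1 = initiator, 10.0.0.2 = target (inside the namespace)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # bridge the host-side veth ends together and open TCP/4420 for NVMe/TCP
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # connectivity check, mirroring the pings in the log
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1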
00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:34.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:22:34.346 00:22:34.346 --- 10.0.0.2 ping statistics --- 00:22:34.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.346 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:34.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:34.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:34.346 00:22:34.346 --- 10.0.0.3 ping statistics --- 00:22:34.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.346 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:34.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:22:34.346 00:22:34.346 --- 10.0.0.1 ping statistics --- 00:22:34.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.346 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=88702 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 88702 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 88702 ']' 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:34.346 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.347 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:34.347 13:40:47 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:34.604 [2024-05-15 13:40:47.470353] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:34.604 [2024-05-15 13:40:47.470724] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.604 [2024-05-15 13:40:47.614037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:34.604 [2024-05-15 13:40:47.632039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.604 [2024-05-15 13:40:47.686916] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.604 [2024-05-15 13:40:47.687255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.604 [2024-05-15 13:40:47.687458] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.604 [2024-05-15 13:40:47.687636] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.604 [2024-05-15 13:40:47.687812] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
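The nvmfappstart step traced here boils down to launching nvmf_tgt inside the target namespace and waiting until its RPC socket answers. A minimal stand-in for that wait loop, assuming an SPDK checkout under spdk/ and the default /var/tmp/spdk.sock RPC socket:

  # start the target in the namespace with the same flags as in the log
  ip netns exec nvmf_tgt_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # crude waitforlisten: poll the RPC socket until the app responds
  until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"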
00:22:34.604 [2024-05-15 13:40:47.688143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.604 [2024-05-15 13:40:47.688296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.604 [2024-05-15 13:40:47.688346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.604 [2024-05-15 13:40:47.688352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 Malloc0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 Delay0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 [2024-05-15 13:40:48.440196] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:35.540 [2024-05-15 13:40:48.468133] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:35.540 [2024-05-15 13:40:48.468924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:35.540 13:40:48 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=88772 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:38.115 13:40:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:38.115 [global] 00:22:38.115 thread=1 00:22:38.115 invalidate=1 00:22:38.115 rw=write 00:22:38.115 time_based=1 00:22:38.115 runtime=60 00:22:38.115 ioengine=libaio 00:22:38.115 direct=1 00:22:38.115 bs=4096 00:22:38.115 iodepth=1 00:22:38.115 norandommap=0 00:22:38.115 numjobs=1 00:22:38.115 00:22:38.115 verify_dump=1 00:22:38.115 verify_backlog=512 00:22:38.115 verify_state_save=0 00:22:38.115 do_verify=1 00:22:38.115 verify=crc32c-intel 00:22:38.115 [job0] 00:22:38.115 
filename=/dev/nvme0n1 00:22:38.115 Could not set queue depth (nvme0n1) 00:22:38.115 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:38.115 fio-3.35 00:22:38.115 Starting 1 thread 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:40.644 true 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:40.644 true 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:40.644 true 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:40.644 true 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:40.644 13:40:53 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:43.924 true 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:43.924 true 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 
-- # set +x 00:22:43.924 true 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:43.924 true 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:43.924 13:40:56 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 88772 00:23:40.145 00:23:40.145 job0: (groupid=0, jobs=1): err= 0: pid=88793: Wed May 15 13:41:50 2024 00:23:40.145 read: IOPS=931, BW=3724KiB/s (3814kB/s)(218MiB/59987msec) 00:23:40.145 slat (usec): min=7, max=506, avg=10.74, stdev= 4.17 00:23:40.145 clat (usec): min=6, max=1636, avg=181.48, stdev=17.61 00:23:40.145 lat (usec): min=142, max=1664, avg=192.21, stdev=18.91 00:23:40.145 clat percentiles (usec): 00:23:40.145 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:23:40.145 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:23:40.145 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 210], 00:23:40.145 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 262], 99.95th=[ 277], 00:23:40.145 | 99.99th=[ 437] 00:23:40.145 write: IOPS=938, BW=3755KiB/s (3846kB/s)(220MiB/59987msec); 0 zone resets 00:23:40.145 slat (usec): min=9, max=16139, avg=16.84, stdev=75.91 00:23:40.145 clat (usec): min=84, max=40678k, avg=855.37, stdev=171404.25 00:23:40.145 lat (usec): min=109, max=40678k, avg=872.21, stdev=171404.26 00:23:40.145 clat percentiles (usec): 00:23:40.145 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:23:40.145 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:23:40.145 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 159], 00:23:40.145 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 204], 99.95th=[ 219], 00:23:40.145 | 99.99th=[ 619] 00:23:40.145 bw ( KiB/s): min= 136, max=12288, per=100.00%, avg=11250.26, stdev=2094.56, samples=39 00:23:40.145 iops : min= 34, max= 3072, avg=2812.56, stdev=523.64, samples=39 00:23:40.145 lat (usec) : 10=0.01%, 100=0.02%, 250=99.87%, 500=0.10%, 750=0.01% 00:23:40.145 lat (usec) : 1000=0.01% 00:23:40.145 lat (msec) : 2=0.01%, 10=0.01%, >=2000=0.01% 00:23:40.145 cpu : usr=0.52%, sys=2.02%, ctx=112696, majf=0, minf=2 00:23:40.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:40.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:40.145 issued rwts: total=55853,56320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:40.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:40.145 00:23:40.145 Run status group 0 (all jobs): 00:23:40.145 READ: bw=3724KiB/s (3814kB/s), 3724KiB/s-3724KiB/s (3814kB/s-3814kB/s), io=218MiB (229MB), run=59987-59987msec 00:23:40.145 WRITE: bw=3755KiB/s (3846kB/s), 3755KiB/s-3755KiB/s (3846kB/s-3846kB/s), io=220MiB (231MB), run=59987-59987msec 00:23:40.145 00:23:40.145 Disk stats (read/write): 00:23:40.145 nvme0n1: ios=56050/55808, merge=0/0, ticks=10313/7812, in_queue=18125, util=99.84% 00:23:40.145 
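Condensed, the initiator_timeout sequence traced above is: start the 60-second fio write/verify job against /dev/nvme0n1, push the Delay0 bdev latencies up by several orders of magnitude while it runs, then drop them back to 30 before letting the job finish. The values below are microseconds and are copied from the trace; rpc.py being on PATH is an assumption of this sketch.

  # raise the delay-bdev latencies while the fio job is in flight
  rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000   # value exactly as it appears in the trace

  sleep 3

  # lower everything back to 30 us and let fio run to completion
  for lat_type in avg_read avg_write p99_read p99_write; do
      rpc.py bdev_delay_update_latency Delay0 "$lat_type" 30
  done
  wait "$fio_pid"   # the trace does this as 'wait 88772'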
13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:40.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:40.145 nvmf hotplug test: fio successful as expected 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.145 rmmod nvme_tcp 00:23:40.145 rmmod nvme_fabrics 00:23:40.145 13:41:50 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 88702 ']' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 88702 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 88702 ']' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 88702 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@951 -- # uname 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88702 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:40.145 killing process with pid 88702 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88702' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 88702 00:23:40.145 [2024-05-15 13:41:51.030415] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 88702 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.145 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.146 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:40.146 00:23:40.146 real 1m4.409s 00:23:40.146 user 3m48.367s 00:23:40.146 sys 0m25.791s 00:23:40.146 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:40.146 ************************************ 00:23:40.146 END TEST nvmf_initiator_timeout 00:23:40.146 ************************************ 00:23:40.146 13:41:51 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:40.146 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:23:40.146 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.146 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.146 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:23:40.146 13:41:51 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:40.146 13:41:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:23:40.146 ************************************ 00:23:40.146 START TEST nvmf_identify 00:23:40.146 ************************************ 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:40.146 * Looking for test storage... 00:23:40.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:40.146 Cannot find device "nvmf_tgt_br" 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:40.146 Cannot find device "nvmf_tgt_br2" 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:40.146 Cannot find device "nvmf_tgt_br" 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:40.146 Cannot find device "nvmf_tgt_br2" 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:40.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.146 13:41:51 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:23:40.146 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:40.147 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:40.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:40.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:23:40.147 00:23:40.147 --- 10.0.0.2 ping statistics --- 00:23:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.147 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:40.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:40.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:40.147 00:23:40.147 --- 10.0.0.3 ping statistics --- 00:23:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.147 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:23:40.147 00:23:40.147 --- 10.0.0.1 ping statistics --- 00:23:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.147 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=89626 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 89626 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 89626 ']' 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
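A condensed sketch of the test network that nvmf_veth_init builds in the trace above, using only the iproute2/iptables commands visible there (namespace and interface names are copied verbatim; the link-up steps are summarized in a trailing comment):

# target-side namespace plus three veth pairs; the target ends move into the namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, first target 10.0.0.2, second target 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge nvmf_br joins the host-side peers; NVMe/TCP port 4420 is allowed in
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# all links (and lo inside the namespace) are brought up, and the three pings above verify connectivity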
00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.147 13:41:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 [2024-05-15 13:41:51.986156] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:40.147 [2024-05-15 13:41:51.986496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.147 [2024-05-15 13:41:52.109640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:40.147 [2024-05-15 13:41:52.127784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.147 [2024-05-15 13:41:52.179693] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.147 [2024-05-15 13:41:52.179924] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.147 [2024-05-15 13:41:52.180025] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.147 [2024-05-15 13:41:52.180075] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.147 [2024-05-15 13:41:52.180104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.147 [2024-05-15 13:41:52.180407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:40.147 [2024-05-15 13:41:52.180596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.147 [2024-05-15 13:41:52.180663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.147 [2024-05-15 13:41:52.180665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 [2024-05-15 13:41:52.284215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 Malloc0 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:40.147 13:41:52 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 [2024-05-15 13:41:52.393367] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:40.147 [2024-05-15 13:41:52.394033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.147 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.147 [ 00:23:40.147 { 00:23:40.147 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:40.148 "subtype": "Discovery", 00:23:40.148 "listen_addresses": [ 00:23:40.148 { 00:23:40.148 "trtype": "TCP", 00:23:40.148 "adrfam": "IPv4", 00:23:40.148 "traddr": "10.0.0.2", 00:23:40.148 "trsvcid": "4420" 00:23:40.148 } 00:23:40.148 ], 00:23:40.148 "allow_any_host": true, 00:23:40.148 "hosts": [] 00:23:40.148 }, 00:23:40.148 { 00:23:40.148 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.148 "subtype": "NVMe", 00:23:40.148 "listen_addresses": [ 00:23:40.148 { 00:23:40.148 "trtype": "TCP", 00:23:40.148 "adrfam": "IPv4", 00:23:40.148 "traddr": "10.0.0.2", 00:23:40.148 "trsvcid": "4420" 00:23:40.148 } 00:23:40.148 ], 00:23:40.148 "allow_any_host": true, 00:23:40.148 "hosts": [], 00:23:40.148 "serial_number": "SPDK00000000000001", 00:23:40.148 "model_number": "SPDK bdev Controller", 00:23:40.148 "max_namespaces": 32, 00:23:40.148 "min_cntlid": 1, 00:23:40.148 "max_cntlid": 65519, 00:23:40.148 "namespaces": [ 00:23:40.148 { 00:23:40.148 "nsid": 1, 00:23:40.148 "bdev_name": "Malloc0", 00:23:40.148 "name": "Malloc0", 00:23:40.148 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:40.148 "eui64": "ABCDEF0123456789", 00:23:40.148 "uuid": "2be418de-445e-44a3-9a70-183c794ebf3f" 
00:23:40.148 } 00:23:40.148 ] 00:23:40.148 } 00:23:40.148 ] 00:23:40.148 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.148 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:40.148 [2024-05-15 13:41:52.458530] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:40.148 [2024-05-15 13:41:52.458834] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89653 ] 00:23:40.148 [2024-05-15 13:41:52.583940] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:40.148 [2024-05-15 13:41:52.593316] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:40.148 [2024-05-15 13:41:52.593384] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:40.148 [2024-05-15 13:41:52.593390] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:40.148 [2024-05-15 13:41:52.593405] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:40.148 [2024-05-15 13:41:52.593420] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:23:40.148 [2024-05-15 13:41:52.593576] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:40.148 [2024-05-15 13:41:52.593624] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10a2590 0 00:23:40.148 [2024-05-15 13:41:52.609269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:40.148 [2024-05-15 13:41:52.609315] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:40.148 [2024-05-15 13:41:52.609337] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:40.148 [2024-05-15 13:41:52.609344] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:40.148 [2024-05-15 13:41:52.609407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.609415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.609421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.609451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:40.148 [2024-05-15 13:41:52.609492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.148 [2024-05-15 13:41:52.626274] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.148 [2024-05-15 13:41:52.626301] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.148 [2024-05-15 13:41:52.626307] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626313] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 
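For reference, the target-side configuration traced above, condensed to the rpc_cmd calls the test issues (rpc_cmd is the test framework's wrapper around scripts/rpc.py; every argument is copied from the trace), followed by the identify invocation whose NVMe/TCP handshake the debug output here records:

# TCP transport with the test's options (-u 8192 is the I/O unit size)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
# a 64 MB, 512-byte-block malloc bdev backs namespace 1 of cnode1
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# listeners on 10.0.0.2:4420 for both the NVM subsystem and the discovery subsystem
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# identify the discovery controller over NVMe/TCP with full debug logging
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all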
00:23:40.148 [2024-05-15 13:41:52.626332] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:40.148 [2024-05-15 13:41:52.626343] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:40.148 [2024-05-15 13:41:52.626351] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:40.148 [2024-05-15 13:41:52.626369] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626375] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626380] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.626410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.148 [2024-05-15 13:41:52.626448] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.148 [2024-05-15 13:41:52.626544] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.148 [2024-05-15 13:41:52.626551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.148 [2024-05-15 13:41:52.626556] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626561] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.148 [2024-05-15 13:41:52.626568] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:40.148 [2024-05-15 13:41:52.626577] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:40.148 [2024-05-15 13:41:52.626584] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626589] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.626601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.148 [2024-05-15 13:41:52.626618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.148 [2024-05-15 13:41:52.626660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.148 [2024-05-15 13:41:52.626667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.148 [2024-05-15 13:41:52.626671] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.148 [2024-05-15 13:41:52.626683] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:40.148 [2024-05-15 13:41:52.626693] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:40.148 [2024-05-15 13:41:52.626700] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 
13:41:52.626705] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.626717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.148 [2024-05-15 13:41:52.626732] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.148 [2024-05-15 13:41:52.626842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.148 [2024-05-15 13:41:52.626849] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.148 [2024-05-15 13:41:52.626853] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626858] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.148 [2024-05-15 13:41:52.626865] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:40.148 [2024-05-15 13:41:52.626875] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626880] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626884] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.626892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.148 [2024-05-15 13:41:52.626906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.148 [2024-05-15 13:41:52.626954] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.148 [2024-05-15 13:41:52.626961] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.148 [2024-05-15 13:41:52.626965] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.626970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.148 [2024-05-15 13:41:52.626977] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:40.148 [2024-05-15 13:41:52.626983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:40.148 [2024-05-15 13:41:52.626991] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:40.148 [2024-05-15 13:41:52.627097] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:40.148 [2024-05-15 13:41:52.627103] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:40.148 [2024-05-15 13:41:52.627113] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.627118] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.148 [2024-05-15 13:41:52.627122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.148 [2024-05-15 13:41:52.627129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.149 [2024-05-15 13:41:52.627145] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.149 [2024-05-15 13:41:52.627196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.149 [2024-05-15 13:41:52.627202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.149 [2024-05-15 13:41:52.627207] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627211] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.149 [2024-05-15 13:41:52.627218] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:40.149 [2024-05-15 13:41:52.627228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627233] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627238] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.149 [2024-05-15 13:41:52.627270] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.149 [2024-05-15 13:41:52.627314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.149 [2024-05-15 13:41:52.627324] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.149 [2024-05-15 13:41:52.627332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.149 [2024-05-15 13:41:52.627350] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:40.149 [2024-05-15 13:41:52.627360] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627374] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:40.149 [2024-05-15 13:41:52.627396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627409] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.149 [2024-05-15 13:41:52.627440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.149 [2024-05-15 13:41:52.627525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:23:40.149 [2024-05-15 13:41:52.627532] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.149 [2024-05-15 13:41:52.627537] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627541] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a2590): datao=0, datal=4096, cccid=0 00:23:40.149 [2024-05-15 13:41:52.627548] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e9330) on tqpair(0x10a2590): expected_datao=0, payload_size=4096 00:23:40.149 [2024-05-15 13:41:52.627554] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627563] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627569] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627578] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.149 [2024-05-15 13:41:52.627584] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.149 [2024-05-15 13:41:52.627589] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627593] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.149 [2024-05-15 13:41:52.627605] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:40.149 [2024-05-15 13:41:52.627611] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:40.149 [2024-05-15 13:41:52.627616] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:40.149 [2024-05-15 13:41:52.627623] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:40.149 [2024-05-15 13:41:52.627629] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:40.149 [2024-05-15 13:41:52.627634] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627659] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627664] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627668] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.149 [2024-05-15 13:41:52.627692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.149 [2024-05-15 13:41:52.627740] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.149 [2024-05-15 13:41:52.627747] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.149 [2024-05-15 13:41:52.627752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 
13:41:52.627757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9330) on tqpair=0x10a2590 00:23:40.149 [2024-05-15 13:41:52.627766] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627771] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627775] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.149 [2024-05-15 13:41:52.627790] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627794] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627799] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.149 [2024-05-15 13:41:52.627813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627817] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627822] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.149 [2024-05-15 13:41:52.627836] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627845] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.149 [2024-05-15 13:41:52.627857] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627869] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:40.149 [2024-05-15 13:41:52.627877] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.627882] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.627889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.149 [2024-05-15 13:41:52.627906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9330, cid 0, qid 0 00:23:40.149 [2024-05-15 13:41:52.627912] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9490, cid 1, qid 0 00:23:40.149 [2024-05-15 13:41:52.627917] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e95f0, cid 2, qid 0 00:23:40.149 [2024-05-15 13:41:52.627923] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x10e9750, cid 3, qid 0 00:23:40.149 [2024-05-15 13:41:52.627928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e98b0, cid 4, qid 0 00:23:40.149 [2024-05-15 13:41:52.628006] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.149 [2024-05-15 13:41:52.628013] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.149 [2024-05-15 13:41:52.628017] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.628022] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e98b0) on tqpair=0x10a2590 00:23:40.149 [2024-05-15 13:41:52.628029] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:40.149 [2024-05-15 13:41:52.628035] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:40.149 [2024-05-15 13:41:52.628046] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.628051] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a2590) 00:23:40.149 [2024-05-15 13:41:52.628058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.149 [2024-05-15 13:41:52.628073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e98b0, cid 4, qid 0 00:23:40.149 [2024-05-15 13:41:52.628119] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.149 [2024-05-15 13:41:52.628126] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.149 [2024-05-15 13:41:52.628130] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.149 [2024-05-15 13:41:52.628135] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a2590): datao=0, datal=4096, cccid=4 00:23:40.149 [2024-05-15 13:41:52.628141] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e98b0) on tqpair(0x10a2590): expected_datao=0, payload_size=4096 00:23:40.150 [2024-05-15 13:41:52.628146] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628153] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628158] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628167] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.150 [2024-05-15 13:41:52.628173] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.150 [2024-05-15 13:41:52.628178] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628182] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e98b0) on tqpair=0x10a2590 00:23:40.150 [2024-05-15 13:41:52.628198] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:40.150 [2024-05-15 13:41:52.628231] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628252] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a2590) 00:23:40.150 [2024-05-15 13:41:52.628265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.150 [2024-05-15 13:41:52.628279] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628287] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628296] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10a2590) 00:23:40.150 [2024-05-15 13:41:52.628307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.150 [2024-05-15 13:41:52.628341] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e98b0, cid 4, qid 0 00:23:40.150 [2024-05-15 13:41:52.628350] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9a10, cid 5, qid 0 00:23:40.150 [2024-05-15 13:41:52.628459] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.150 [2024-05-15 13:41:52.628473] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.150 [2024-05-15 13:41:52.628480] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628487] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a2590): datao=0, datal=1024, cccid=4 00:23:40.150 [2024-05-15 13:41:52.628497] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e98b0) on tqpair(0x10a2590): expected_datao=0, payload_size=1024 00:23:40.150 [2024-05-15 13:41:52.628506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628517] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628524] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628534] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.150 [2024-05-15 13:41:52.628544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.150 [2024-05-15 13:41:52.628552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628560] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9a10) on tqpair=0x10a2590 00:23:40.150 [2024-05-15 13:41:52.628589] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.150 [2024-05-15 13:41:52.628600] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.150 [2024-05-15 13:41:52.628607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628615] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e98b0) on tqpair=0x10a2590 00:23:40.150 [2024-05-15 13:41:52.628637] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a2590) 00:23:40.150 [2024-05-15 13:41:52.628657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.150 [2024-05-15 13:41:52.628686] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e98b0, cid 4, qid 0 00:23:40.150 [2024-05-15 13:41:52.628745] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.150 [2024-05-15 13:41:52.628756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.150 
[2024-05-15 13:41:52.628763] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628771] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a2590): datao=0, datal=3072, cccid=4 00:23:40.150 [2024-05-15 13:41:52.628781] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e98b0) on tqpair(0x10a2590): expected_datao=0, payload_size=3072 00:23:40.150 [2024-05-15 13:41:52.628790] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628802] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628809] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628822] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.150 [2024-05-15 13:41:52.628832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.150 [2024-05-15 13:41:52.628839] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628846] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e98b0) on tqpair=0x10a2590 00:23:40.150 [2024-05-15 13:41:52.628863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10a2590) 00:23:40.150 [2024-05-15 13:41:52.628883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.150 [2024-05-15 13:41:52.628913] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e98b0, cid 4, qid 0 00:23:40.150 [2024-05-15 13:41:52.628964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.150 [2024-05-15 13:41:52.628972] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.150 [2024-05-15 13:41:52.628977] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.150 [2024-05-15 13:41:52.628983] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10a2590): datao=0, datal=8, cccid=4 00:23:40.150 ===================================================== 00:23:40.150 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:40.150 ===================================================== 00:23:40.150 Controller Capabilities/Features 00:23:40.150 ================================ 00:23:40.150 Vendor ID: 0000 00:23:40.150 Subsystem Vendor ID: 0000 00:23:40.150 Serial Number: .................... 00:23:40.150 Model Number: ........................................ 
00:23:40.150 Firmware Version: 24.05 00:23:40.150 Recommended Arb Burst: 0 00:23:40.150 IEEE OUI Identifier: 00 00 00 00:23:40.150 Multi-path I/O 00:23:40.150 May have multiple subsystem ports: No 00:23:40.150 May have multiple controllers: No 00:23:40.150 Associated with SR-IOV VF: No 00:23:40.150 Max Data Transfer Size: 131072 00:23:40.150 Max Number of Namespaces: 0 00:23:40.150 Max Number of I/O Queues: 1024 00:23:40.150 NVMe Specification Version (VS): 1.3 00:23:40.150 NVMe Specification Version (Identify): 1.3 00:23:40.150 Maximum Queue Entries: 128 00:23:40.150 Contiguous Queues Required: Yes 00:23:40.150 Arbitration Mechanisms Supported 00:23:40.150 Weighted Round Robin: Not Supported 00:23:40.150 Vendor Specific: Not Supported 00:23:40.150 Reset Timeout: 15000 ms 00:23:40.150 Doorbell Stride: 4 bytes 00:23:40.150 NVM Subsystem Reset: Not Supported 00:23:40.150 Command Sets Supported 00:23:40.150 NVM Command Set: Supported 00:23:40.150 Boot Partition: Not Supported 00:23:40.150 Memory Page Size Minimum: 4096 bytes 00:23:40.150 Memory Page Size Maximum: 4096 bytes 00:23:40.150 Persistent Memory Region: Not Supported 00:23:40.150 Optional Asynchronous Events Supported 00:23:40.150 Namespace Attribute Notices: Not Supported 00:23:40.150 Firmware Activation Notices: Not Supported 00:23:40.150 ANA Change Notices: Not Supported 00:23:40.150 PLE Aggregate Log Change Notices: Not Supported 00:23:40.150 LBA Status Info Alert Notices: Not Supported 00:23:40.150 EGE Aggregate Log Change Notices: Not Supported 00:23:40.150 Normal NVM Subsystem Shutdown event: Not Supported 00:23:40.150 Zone Descriptor Change Notices: Not Supported 00:23:40.150 Discovery Log Change Notices: Supported 00:23:40.150 Controller Attributes 00:23:40.150 128-bit Host Identifier: Not Supported 00:23:40.150 Non-Operational Permissive Mode: Not Supported 00:23:40.150 NVM Sets: Not Supported 00:23:40.150 Read Recovery Levels: Not Supported 00:23:40.150 Endurance Groups: Not Supported 00:23:40.150 Predictable Latency Mode: Not Supported 00:23:40.151 Traffic Based Keep ALive: Not Supported 00:23:40.151 Namespace Granularity: Not Supported 00:23:40.151 SQ Associations: Not Supported 00:23:40.151 UUID List: Not Supported 00:23:40.151 Multi-Domain Subsystem: Not Supported 00:23:40.151 Fixed Capacity Management: Not Supported 00:23:40.151 Variable Capacity Management: Not Supported 00:23:40.151 Delete Endurance Group: Not Supported 00:23:40.151 Delete NVM Set: Not Supported 00:23:40.151 Extended LBA Formats Supported: Not Supported 00:23:40.151 Flexible Data Placement Supported: Not Supported 00:23:40.151 00:23:40.151 Controller Memory Buffer Support 00:23:40.151 ================================ 00:23:40.151 Supported: No 00:23:40.151 00:23:40.151 Persistent Memory Region Support 00:23:40.151 ================================ 00:23:40.151 Supported: No 00:23:40.151 00:23:40.151 Admin Command Set Attributes 00:23:40.151 ============================ 00:23:40.151 Security Send/Receive: Not Supported 00:23:40.151 Format NVM: Not Supported 00:23:40.151 Firmware Activate/Download: Not Supported 00:23:40.151 Namespace Management: Not Supported 00:23:40.151 Device Self-Test: Not Supported 00:23:40.151 Directives: Not Supported 00:23:40.151 NVMe-MI: Not Supported 00:23:40.151 Virtualization Management: Not Supported 00:23:40.151 Doorbell Buffer Config: Not Supported 00:23:40.151 Get LBA Status Capability: Not Supported 00:23:40.151 Command & Feature Lockdown Capability: Not Supported 00:23:40.151 Abort Command Limit: 1 00:23:40.151 Async 
Event Request Limit: 4 00:23:40.151 Number of Firmware Slots: N/A 00:23:40.151 Firmware Slot 1 Read-Only: N/A 00:23:40.151 Firm[2024-05-15 13:41:52.628990] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10e98b0) on tqpair(0x10a2590): expected_datao=0, payload_size=8 00:23:40.151 [2024-05-15 13:41:52.628998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629006] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629012] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629027] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.151 [2024-05-15 13:41:52.629035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.151 [2024-05-15 13:41:52.629041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e98b0) on tqpair=0x10a2590 00:23:40.151 ware Activation Without Reset: N/A 00:23:40.151 Multiple Update Detection Support: N/A 00:23:40.151 Firmware Update Granularity: No Information Provided 00:23:40.151 Per-Namespace SMART Log: No 00:23:40.151 Asymmetric Namespace Access Log Page: Not Supported 00:23:40.151 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:40.151 Command Effects Log Page: Not Supported 00:23:40.151 Get Log Page Extended Data: Supported 00:23:40.151 Telemetry Log Pages: Not Supported 00:23:40.151 Persistent Event Log Pages: Not Supported 00:23:40.151 Supported Log Pages Log Page: May Support 00:23:40.151 Commands Supported & Effects Log Page: Not Supported 00:23:40.151 Feature Identifiers & Effects Log Page:May Support 00:23:40.151 NVMe-MI Commands & Effects Log Page: May Support 00:23:40.151 Data Area 4 for Telemetry Log: Not Supported 00:23:40.151 Error Log Page Entries Supported: 128 00:23:40.151 Keep Alive: Not Supported 00:23:40.151 00:23:40.151 NVM Command Set Attributes 00:23:40.151 ========================== 00:23:40.151 Submission Queue Entry Size 00:23:40.151 Max: 1 00:23:40.151 Min: 1 00:23:40.151 Completion Queue Entry Size 00:23:40.151 Max: 1 00:23:40.151 Min: 1 00:23:40.151 Number of Namespaces: 0 00:23:40.151 Compare Command: Not Supported 00:23:40.151 Write Uncorrectable Command: Not Supported 00:23:40.151 Dataset Management Command: Not Supported 00:23:40.151 Write Zeroes Command: Not Supported 00:23:40.151 Set Features Save Field: Not Supported 00:23:40.151 Reservations: Not Supported 00:23:40.151 Timestamp: Not Supported 00:23:40.151 Copy: Not Supported 00:23:40.151 Volatile Write Cache: Not Present 00:23:40.151 Atomic Write Unit (Normal): 1 00:23:40.151 Atomic Write Unit (PFail): 1 00:23:40.151 Atomic Compare & Write Unit: 1 00:23:40.151 Fused Compare & Write: Supported 00:23:40.151 Scatter-Gather List 00:23:40.151 SGL Command Set: Supported 00:23:40.151 SGL Keyed: Supported 00:23:40.151 SGL Bit Bucket Descriptor: Not Supported 00:23:40.151 SGL Metadata Pointer: Not Supported 00:23:40.151 Oversized SGL: Not Supported 00:23:40.151 SGL Metadata Address: Not Supported 00:23:40.151 SGL Offset: Supported 00:23:40.151 Transport SGL Data Block: Not Supported 00:23:40.151 Replay Protected Memory Block: Not Supported 00:23:40.151 00:23:40.151 Firmware Slot Information 00:23:40.151 ========================= 00:23:40.151 Active slot: 0 00:23:40.151 00:23:40.151 00:23:40.151 Error Log 00:23:40.151 ========= 00:23:40.151 00:23:40.151 Active 
Namespaces 00:23:40.151 ================= 00:23:40.151 Discovery Log Page 00:23:40.151 ================== 00:23:40.151 Generation Counter: 2 00:23:40.151 Number of Records: 2 00:23:40.151 Record Format: 0 00:23:40.151 00:23:40.151 Discovery Log Entry 0 00:23:40.151 ---------------------- 00:23:40.151 Transport Type: 3 (TCP) 00:23:40.151 Address Family: 1 (IPv4) 00:23:40.151 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:40.151 Entry Flags: 00:23:40.151 Duplicate Returned Information: 1 00:23:40.151 Explicit Persistent Connection Support for Discovery: 1 00:23:40.151 Transport Requirements: 00:23:40.151 Secure Channel: Not Required 00:23:40.151 Port ID: 0 (0x0000) 00:23:40.151 Controller ID: 65535 (0xffff) 00:23:40.151 Admin Max SQ Size: 128 00:23:40.151 Transport Service Identifier: 4420 00:23:40.151 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:40.151 Transport Address: 10.0.0.2 00:23:40.151 Discovery Log Entry 1 00:23:40.151 ---------------------- 00:23:40.151 Transport Type: 3 (TCP) 00:23:40.151 Address Family: 1 (IPv4) 00:23:40.151 Subsystem Type: 2 (NVM Subsystem) 00:23:40.151 Entry Flags: 00:23:40.151 Duplicate Returned Information: 0 00:23:40.151 Explicit Persistent Connection Support for Discovery: 0 00:23:40.151 Transport Requirements: 00:23:40.151 Secure Channel: Not Required 00:23:40.151 Port ID: 0 (0x0000) 00:23:40.151 Controller ID: 65535 (0xffff) 00:23:40.151 Admin Max SQ Size: 128 00:23:40.151 Transport Service Identifier: 4420 00:23:40.151 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:40.151 Transport Address: 10.0.0.2 [2024-05-15 13:41:52.629189] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:40.151 [2024-05-15 13:41:52.629209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.151 [2024-05-15 13:41:52.629219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.151 [2024-05-15 13:41:52.629228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.151 [2024-05-15 13:41:52.629253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.151 [2024-05-15 13:41:52.629268] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629275] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629281] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.151 [2024-05-15 13:41:52.629291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.151 [2024-05-15 13:41:52.629314] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.151 [2024-05-15 13:41:52.629365] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.151 [2024-05-15 13:41:52.629373] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.151 [2024-05-15 13:41:52.629378] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629385] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on 
tqpair=0x10a2590 00:23:40.151 [2024-05-15 13:41:52.629395] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629401] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629406] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.151 [2024-05-15 13:41:52.629415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.151 [2024-05-15 13:41:52.629446] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.151 [2024-05-15 13:41:52.629504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.151 [2024-05-15 13:41:52.629513] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.151 [2024-05-15 13:41:52.629518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629524] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.151 [2024-05-15 13:41:52.629532] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:40.151 [2024-05-15 13:41:52.629539] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:40.151 [2024-05-15 13:41:52.629551] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.151 [2024-05-15 13:41:52.629563] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.151 [2024-05-15 13:41:52.629572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.629589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.629659] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.629667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.629672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629678] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.629691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629697] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629703] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.629712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.629728] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.629785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.629793] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.629799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:40.152 [2024-05-15 13:41:52.629805] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.629817] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629829] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.629838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.629854] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.629942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.629950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.629955] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629961] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.629974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629980] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.629985] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.629994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630010] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630072] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630077] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630083] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630096] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630102] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630108] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630133] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630186] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630200] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630206] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630218] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630335] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630340] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630356] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630361] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630448] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630455] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630459] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630464] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630480] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630484] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630506] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630564] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630580] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630591] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630596] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 
13:41:52.630612] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630634] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630689] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630695] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630699] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630703] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630714] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630809] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630813] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630824] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630832] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630853] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.630905] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.630911] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.630915] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630920] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.630930] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630935] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.630939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.630946] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.630960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.631011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.631018] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.631022] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.631026] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.152 [2024-05-15 13:41:52.631036] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.631041] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.152 [2024-05-15 13:41:52.631045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.152 [2024-05-15 13:41:52.631052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.152 [2024-05-15 13:41:52.631065] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.152 [2024-05-15 13:41:52.631113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.152 [2024-05-15 13:41:52.631119] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.152 [2024-05-15 13:41:52.631123] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631139] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631143] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631147] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631168] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631218] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631222] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631226] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631237] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631245] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631295] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631342] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631348] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631352] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631367] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631372] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631376] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631397] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631437] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631443] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631447] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631452] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631467] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631471] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631534] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631540] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631544] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631549] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631559] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631564] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631568] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631588] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
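(Aside on the repeating pattern above, not part of the captured console output: after nvme_ctrlr_shutdown_set_cc_done reports RTD3E = 0 us and a 10000 ms shutdown timeout, the single FABRIC PROPERTY SET appears to write CC to request shutdown, and each subsequent FABRIC PROPERTY GET capsule (qid 0, cid 3 on the same tqpair) re-reads the controller status register until its shutdown-status field reports completion or the timeout expires. A minimal sketch of that polling loop in C, with the property accessors stubbed out; the helper names are illustrative stand-ins, not SPDK's API.)

/* Sketch of the shutdown handshake the trace shows: one PROPERTY SET of CC,
 * then repeated PROPERTY GETs of CSTS until SHST says "complete" or we time out.
 * The fabric_property_* helpers below are hypothetical stubs for illustration. */
#include <stdint.h>
#include <stdio.h>

#define CC_SHN_NORMAL   (1u << 14)   /* CC.SHN = 01b: normal shutdown request   */
#define CSTS_SHST_MASK  (3u << 2)    /* CSTS.SHST field (bits 3:2)              */
#define CSTS_SHST_DONE  (2u << 2)    /* SHST = 10b: shutdown processing complete */

/* Stub: pretend the controller finishes shutdown after a few polls. */
static uint32_t fabric_property_get_csts(void)
{
    static int polls;
    return (++polls < 5) ? 0 : CSTS_SHST_DONE;
}

/* Stub: stands in for sending a fabrics PROPERTY SET capsule for CC. */
static void fabric_property_set_cc(uint32_t cc)
{
    printf("PROPERTY SET CC = 0x%08x\n", (unsigned)cc);
}

int main(void)
{
    fabric_property_set_cc(CC_SHN_NORMAL);      /* the single PROPERTY SET */

    const int max_polls = 10000;                /* stands in for the 10000 ms timeout */
    for (int i = 0; i < max_polls; i++) {       /* the repeated PROPERTY GETs */
        uint32_t csts = fabric_property_get_csts();
        if ((csts & CSTS_SHST_MASK) == CSTS_SHST_DONE) {
            printf("shutdown complete after %d polls\n", i + 1);
            return 0;
        }
    }
    printf("shutdown timed out\n");
    return 1;
}

(End of aside; the captured trace continues below.)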
00:23:40.153 [2024-05-15 13:41:52.631632] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631636] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631640] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631651] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631660] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631680] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631718] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631724] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631728] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631732] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631743] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631747] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631751] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631772] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631815] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631821] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631844] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631849] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631869] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.631906] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.631913] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.631917] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631921] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.631931] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631936] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.631940] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.631947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.631960] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.632001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.632007] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.632012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632017] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.632027] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632032] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632036] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.632043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.632056] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.632100] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.632106] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.632110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632114] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.632125] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632129] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.632140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.632154] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.632198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.632204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.632208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on 
tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.632223] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.153 [2024-05-15 13:41:52.632253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-05-15 13:41:52.632269] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.153 [2024-05-15 13:41:52.632314] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-05-15 13:41:52.632320] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-05-15 13:41:52.632324] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632329] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.153 [2024-05-15 13:41:52.632339] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-05-15 13:41:52.632344] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632348] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632406] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632417] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632432] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632436] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632441] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632461] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632504] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632519] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632529] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632534] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632538] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632559] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632607] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632614] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632633] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632637] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632641] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632726] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632730] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632741] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632746] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632751] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632813] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632820] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632824] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632829] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632840] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632845] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632849] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 
00:23:40.154 [2024-05-15 13:41:52.632856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632871] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.632917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.632923] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.632928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632933] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.632944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632949] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.632953] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.632960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.632975] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.633021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.633028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.633039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633044] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.633055] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.633071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.633086] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.633132] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.633139] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.633143] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633148] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.633159] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.633176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 
[2024-05-15 13:41:52.633190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.633234] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.633244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.633261] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633268] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.633281] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633287] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.633302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-05-15 13:41:52.633320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.154 [2024-05-15 13:41:52.633363] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-05-15 13:41:52.633371] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-05-15 13:41:52.633376] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633382] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.154 [2024-05-15 13:41:52.633394] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633401] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-05-15 13:41:52.633406] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.154 [2024-05-15 13:41:52.633415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.633440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.633487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.633495] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.633501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633507] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.633519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633525] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633531] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.633540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.633557] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.633598] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.633606] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.633612] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633618] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.633630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633636] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.633651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.633667] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.633715] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.633723] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.633729] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633735] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.633747] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633753] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633759] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.633768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.633784] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.633829] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.633837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.633842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633848] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.633861] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633867] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633872] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.633881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.633898] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.633939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.633948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 
[2024-05-15 13:41:52.633953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633959] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.633972] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.633983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.633992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634008] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634069] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634076] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634088] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634094] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634100] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634167] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634175] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634200] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634211] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634249] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634299] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634327] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634336] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634401] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634408] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634412] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634417] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634428] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634433] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634437] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634512] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634523] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634542] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634610] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634616] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634620] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634635] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634640] 
nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634665] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634708] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634718] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634723] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.155 [2024-05-15 13:41:52.634733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.155 [2024-05-15 13:41:52.634742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.155 [2024-05-15 13:41:52.634749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.155 [2024-05-15 13:41:52.634762] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.155 [2024-05-15 13:41:52.634803] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.155 [2024-05-15 13:41:52.634809] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.155 [2024-05-15 13:41:52.634813] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634818] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.156 [2024-05-15 13:41:52.634828] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634833] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634837] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.156 [2024-05-15 13:41:52.634844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.156 [2024-05-15 13:41:52.634858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.156 [2024-05-15 13:41:52.634896] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.156 [2024-05-15 13:41:52.634902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.156 [2024-05-15 13:41:52.634906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634911] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.156 [2024-05-15 13:41:52.634921] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634926] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.634930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x10a2590) 00:23:40.156 [2024-05-15 13:41:52.634937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.156 [2024-05-15 13:41:52.634950] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.156 [2024-05-15 13:41:52.634994] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.156 [2024-05-15 13:41:52.635007] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.156 [2024-05-15 13:41:52.635012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635016] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.156 [2024-05-15 13:41:52.635027] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635031] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635036] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.156 [2024-05-15 13:41:52.635043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.156 [2024-05-15 13:41:52.635057] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.156 [2024-05-15 13:41:52.635097] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.156 [2024-05-15 13:41:52.635103] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.156 [2024-05-15 13:41:52.635108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635112] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.156 [2024-05-15 13:41:52.635122] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635127] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.156 [2024-05-15 13:41:52.635138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.156 [2024-05-15 13:41:52.635152] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.156 [2024-05-15 13:41:52.635192] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.156 [2024-05-15 13:41:52.635198] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.156 [2024-05-15 13:41:52.635202] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635207] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.156 [2024-05-15 13:41:52.635217] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.156 [2024-05-15 13:41:52.635226] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10a2590) 00:23:40.156 [2024-05-15 13:41:52.635233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.156 [2024-05-15 13:41:52.635271 .. 13:41:52.652393] nvme_tcp.c / nvme_qpair.c: identical *DEBUG*/*NOTICE* cycle repeated (only the timestamps change) for tcp req 0x10e9750 (cid 3, qid 0) on tqpair(0x10a2590): pdu type = 5 -> nvme_tcp_capsule_resp_hdr_handle -> nvme_tcp_req_complete_safe -> nvme_tcp_build_contig_request -> nvme_tcp_qpair_capsule_cmd_send (capsule_cmd cid=3) -> FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -> nvme_tcp_qpair_cmd_send_complete [repeated duplicate cycles elided]
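The FABRIC PROPERTY GET capsules in the elided cycle appear to be the fabrics-transport equivalent of reading the controller's CSTS register: the driver's internal nvme_ctrlr_shutdown_poll_async() (which reports "shutdown complete in 22 milliseconds" just below) keeps re-issuing the read until CSTS.SHST indicates shutdown has finished. As a minimal sketch only, assuming a ctrlr handle obtained earlier from spdk_nvme_connect(), the same register can be inspected from application code through SPDK's public getter; the polling itself is internal to the driver and is not something an application re-implements:

#include <stdio.h>

#include "spdk/nvme.h"

/* Sketch: read back CSTS from an already-connected controller and check the
 * shutdown-status field that the driver's shutdown poll is waiting on.
 * 'ctrlr' is assumed to have come from spdk_nvme_connect(). */
static void
print_shutdown_status(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("CSTS.RDY=%u CSTS.SHST=%u (%s)\n",
	       (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst,
	       csts.bits.shst == SPDK_NVME_SHST_COMPLETE ?
	       "shutdown complete" : "shutdown not complete");
}

On a TCP controller such register reads do not touch a BAR; they travel as the Property Get capsules seen in the trace above.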
00:23:40.159 [2024-05-15 13:41:52.652433] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10e9750, cid 3, qid 0 00:23:40.159 [2024-05-15 13:41:52.652499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.159 [2024-05-15 13:41:52.652506] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.159 [2024-05-15 13:41:52.652511] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.159 [2024-05-15 13:41:52.652515] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10e9750) on tqpair=0x10a2590 00:23:40.159 [2024-05-15 13:41:52.652525] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 22 milliseconds 00:23:40.159 00:23:40.159 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:40.159 [2024-05-15 13:41:52.683038] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:40.159 [2024-05-15 13:41:52.683099] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89661 ] 00:23:40.159 [2024-05-15 13:41:52.807910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:40.159 [2024-05-15 13:41:52.817481] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:40.159 [2024-05-15 13:41:52.817561] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:40.159 [2024-05-15 13:41:52.817573] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:40.159 [2024-05-15 13:41:52.817593] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:40.159 [2024-05-15 13:41:52.817611] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:23:40.159 [2024-05-15 13:41:52.817799] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:40.159 [2024-05-15 13:41:52.817871] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12a9590 0 00:23:40.159 [2024-05-15 13:41:52.833270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:40.159 [2024-05-15 13:41:52.833302] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:40.159 [2024-05-15 13:41:52.833322] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:40.159 [2024-05-15 13:41:52.833330] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:40.159 [2024-05-15 13:41:52.833401] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.159 [2024-05-15 13:41:52.833412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.159 [2024-05-15 13:41:52.833421] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.159 [2024-05-15 13:41:52.833455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 
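The identify.sh step above runs the prebuilt spdk_nvme_identify tool against the TCP target at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1. Below is a minimal, hedged sketch of the equivalent connect-and-identify flow using SPDK's public API; it is not the tool's actual source, the program name is hypothetical, and the transport string is simply the -r argument copied from the log:

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (DPDK EAL), as the "DPDK EAL
	 * parameters" line in the log does for the real tool. */
	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* Same transport descriptor the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() performs the admin-queue bring-up traced in
	 * the surrounding log lines (icreq, FABRIC CONNECT, register reads,
	 * IDENTIFY, ...). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial: %.20s  Model: %.40s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The FABRIC CONNECT and PROPERTY GET/SET capsules visible around this point in the trace are what a single spdk_nvme_connect() call generates on this transport.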
00:23:40.159 [2024-05-15 13:41:52.833501] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.159 [2024-05-15 13:41:52.859316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.159 [2024-05-15 13:41:52.859356] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.159 [2024-05-15 13:41:52.859365] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.159 [2024-05-15 13:41:52.859374] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.159 [2024-05-15 13:41:52.859402] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:40.159 [2024-05-15 13:41:52.859417] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:40.159 [2024-05-15 13:41:52.859427] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:40.160 [2024-05-15 13:41:52.859461] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859469] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859477] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.859498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.859569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.859717] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.859739] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.859748] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.859772] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:40.160 [2024-05-15 13:41:52.859786] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:40.160 [2024-05-15 13:41:52.859798] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859805] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859812] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.859825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.859856] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.859933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.859950] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.859958] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.859967] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.859979] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:40.160 [2024-05-15 13:41:52.859997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860012] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860021] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860029] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.860043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.860072] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.860154] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.860171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.860179] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860188] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.860201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860216] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860223] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.860256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.860285] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.860366] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.860382] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.860390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860398] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.860409] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:40.160 [2024-05-15 13:41:52.860420] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860436] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860546] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:40.160 
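The state transitions logged here (read vs, read cap, check en, disable and wait for CSTS.RDY = 0, then Setting CC.EN = 1) are the standard NVMe controller-enable sequence, carried over the admin queue as the Fabrics PROPERTY GET/SET capsules shown alongside them. This sequence is internal to the driver; once spdk_nvme_connect() has returned, an application can only read back the resulting register values through the public getters, roughly as in this hedged sketch (ctrlr again assumed to come from spdk_nvme_connect()):

#include <stdio.h>

#include "spdk/nvme.h"

/* Sketch: dump the registers the init state machine above consults. */
static void
dump_init_registers(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("VS  : NVMe %u.%u\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	printf("CAP : TO=%u (x500ms) MQES=%u\n",
	       (unsigned)cap.bits.to, (unsigned)cap.bits.mqes);
	printf("CSTS: RDY=%u\n", (unsigned)csts.bits.rdy);
}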
[2024-05-15 13:41:52.860564] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860582] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.860612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.860641] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.860727] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.860744] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.860752] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860760] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.860773] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:40.160 [2024-05-15 13:41:52.860791] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.860822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.860849] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.860915] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.860930] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.860938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.860947] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.860958] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:40.160 [2024-05-15 13:41:52.860969] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:40.160 [2024-05-15 13:41:52.860984] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:40.160 [2024-05-15 13:41:52.861014] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:40.160 [2024-05-15 13:41:52.861034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861043] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.861057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.160 [2024-05-15 13:41:52.861084] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.861252] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.160 [2024-05-15 13:41:52.861272] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.160 [2024-05-15 13:41:52.861280] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861288] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=4096, cccid=0 00:23:40.160 [2024-05-15 13:41:52.861298] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f0330) on tqpair(0x12a9590): expected_datao=0, payload_size=4096 00:23:40.160 [2024-05-15 13:41:52.861308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861324] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861332] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861347] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.861357] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.861364] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861373] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.861392] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:40.160 [2024-05-15 13:41:52.861402] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:40.160 [2024-05-15 13:41:52.861413] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:40.160 [2024-05-15 13:41:52.861421] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:40.160 [2024-05-15 13:41:52.861450] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:40.160 [2024-05-15 13:41:52.861476] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:40.160 [2024-05-15 13:41:52.861496] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:40.160 [2024-05-15 13:41:52.861517] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.160 [2024-05-15 13:41:52.861548] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.160 [2024-05-15 13:41:52.861583] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x12f0330, cid 0, qid 0 00:23:40.160 [2024-05-15 13:41:52.861650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.160 [2024-05-15 13:41:52.861661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.160 [2024-05-15 13:41:52.861670] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861678] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0330) on tqpair=0x12a9590 00:23:40.160 [2024-05-15 13:41:52.861692] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861699] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.160 [2024-05-15 13:41:52.861707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.861720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.161 [2024-05-15 13:41:52.861733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861750] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.861760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.161 [2024-05-15 13:41:52.861773] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861790] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.861802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.161 [2024-05-15 13:41:52.861813] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861820] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861827] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.861838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.161 [2024-05-15 13:41:52.861848] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.861870] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.861885] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.861893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.861906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.161 [2024-05-15 13:41:52.861941] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0330, cid 0, qid 0 00:23:40.161 [2024-05-15 
13:41:52.861958] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0490, cid 1, qid 0 00:23:40.161 [2024-05-15 13:41:52.861967] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f05f0, cid 2, qid 0 00:23:40.161 [2024-05-15 13:41:52.861977] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.161 [2024-05-15 13:41:52.861986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.161 [2024-05-15 13:41:52.862051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.161 [2024-05-15 13:41:52.862067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.161 [2024-05-15 13:41:52.862076] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862084] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.161 [2024-05-15 13:41:52.862097] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:40.161 [2024-05-15 13:41:52.862107] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862142] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862171] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.862183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.161 [2024-05-15 13:41:52.862212] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.161 [2024-05-15 13:41:52.862269] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.161 [2024-05-15 13:41:52.862282] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.161 [2024-05-15 13:41:52.862290] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.161 [2024-05-15 13:41:52.862374] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862395] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862420] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.862433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY 
(06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.161 [2024-05-15 13:41:52.862463] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.161 [2024-05-15 13:41:52.862541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.161 [2024-05-15 13:41:52.862559] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.161 [2024-05-15 13:41:52.862567] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862575] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=4096, cccid=4 00:23:40.161 [2024-05-15 13:41:52.862586] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f08b0) on tqpair(0x12a9590): expected_datao=0, payload_size=4096 00:23:40.161 [2024-05-15 13:41:52.862595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862607] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862615] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862630] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.161 [2024-05-15 13:41:52.862642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.161 [2024-05-15 13:41:52.862649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.161 [2024-05-15 13:41:52.862682] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:40.161 [2024-05-15 13:41:52.862701] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862735] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.862756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.161 [2024-05-15 13:41:52.862785] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.161 [2024-05-15 13:41:52.862851] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.161 [2024-05-15 13:41:52.862866] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.161 [2024-05-15 13:41:52.862873] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862882] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=4096, cccid=4 00:23:40.161 [2024-05-15 13:41:52.862892] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f08b0) on tqpair(0x12a9590): expected_datao=0, payload_size=4096 00:23:40.161 [2024-05-15 13:41:52.862902] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862913] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862919] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862934] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.161 [2024-05-15 13:41:52.862945] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.161 [2024-05-15 13:41:52.862952] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.862960] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.161 [2024-05-15 13:41:52.862982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.862997] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863010] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.161 [2024-05-15 13:41:52.863031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.161 [2024-05-15 13:41:52.863062] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.161 [2024-05-15 13:41:52.863112] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.161 [2024-05-15 13:41:52.863128] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.161 [2024-05-15 13:41:52.863136] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863144] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=4096, cccid=4 00:23:40.161 [2024-05-15 13:41:52.863153] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f08b0) on tqpair(0x12a9590): expected_datao=0, payload_size=4096 00:23:40.161 [2024-05-15 13:41:52.863163] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863176] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863184] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.161 [2024-05-15 13:41:52.863208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.161 [2024-05-15 13:41:52.863214] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.161 [2024-05-15 13:41:52.863222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.161 [2024-05-15 13:41:52.863262] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported 
features (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863312] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863322] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:40.161 [2024-05-15 13:41:52.863333] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:40.161 [2024-05-15 13:41:52.863343] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:40.162 [2024-05-15 13:41:52.863354] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:40.162 [2024-05-15 13:41:52.863389] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863398] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.863412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.863424] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.863450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.162 [2024-05-15 13:41:52.863490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.162 [2024-05-15 13:41:52.863503] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0a10, cid 5, qid 0 00:23:40.162 [2024-05-15 13:41:52.863558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.863569] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.863576] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863585] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.863599] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.863610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.863618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863625] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0a10) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.863646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863656] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.863669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.863692] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0a10, cid 5, qid 0 00:23:40.162 [2024-05-15 13:41:52.863748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.863760] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.863768] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863777] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0a10) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.863796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863803] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.863815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.863842] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0a10, cid 5, qid 0 00:23:40.162 [2024-05-15 13:41:52.863894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.863907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.863915] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863922] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0a10) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.863940] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.863949] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.863961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.863988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0a10, cid 5, qid 0 00:23:40.162 [2024-05-15 13:41:52.864033] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.864045] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.864052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864059] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0a10) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.864083] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.864104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.864116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864125] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.864136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 
[2024-05-15 13:41:52.864150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.864168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.864189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12a9590) 00:23:40.162 [2024-05-15 13:41:52.864209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.162 [2024-05-15 13:41:52.864251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0a10, cid 5, qid 0 00:23:40.162 [2024-05-15 13:41:52.864263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f08b0, cid 4, qid 0 00:23:40.162 [2024-05-15 13:41:52.864273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0b70, cid 6, qid 0 00:23:40.162 [2024-05-15 13:41:52.864282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0cd0, cid 7, qid 0 00:23:40.162 [2024-05-15 13:41:52.864402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.162 [2024-05-15 13:41:52.864421] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.162 [2024-05-15 13:41:52.864429] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864436] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=8192, cccid=5 00:23:40.162 [2024-05-15 13:41:52.864447] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f0a10) on tqpair(0x12a9590): expected_datao=0, payload_size=8192 00:23:40.162 [2024-05-15 13:41:52.864456] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864482] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864491] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.162 [2024-05-15 13:41:52.864511] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.162 [2024-05-15 13:41:52.864519] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864527] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=512, cccid=4 00:23:40.162 [2024-05-15 13:41:52.864537] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f08b0) on tqpair(0x12a9590): expected_datao=0, payload_size=512 00:23:40.162 [2024-05-15 13:41:52.864546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864556] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864563] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864575] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.162 [2024-05-15 13:41:52.864585] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.162 [2024-05-15 13:41:52.864593] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864601] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=512, cccid=6 00:23:40.162 [2024-05-15 13:41:52.864611] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f0b70) on tqpair(0x12a9590): expected_datao=0, payload_size=512 00:23:40.162 [2024-05-15 13:41:52.864620] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864633] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864641] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864651] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.162 [2024-05-15 13:41:52.864660] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.162 [2024-05-15 13:41:52.864666] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864673] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a9590): datao=0, datal=4096, cccid=7 00:23:40.162 [2024-05-15 13:41:52.864682] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f0cd0) on tqpair(0x12a9590): expected_datao=0, payload_size=4096 00:23:40.162 [2024-05-15 13:41:52.864691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864704] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864712] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864722] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.864732] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.864739] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0a10) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.864787] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.864803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.864810] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864818] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f08b0) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.864838] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.864850] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.864859] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864867] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0b70) on tqpair=0x12a9590 00:23:40.162 [2024-05-15 13:41:52.864886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.162 [2024-05-15 13:41:52.864897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.162 [2024-05-15 13:41:52.864904] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.162 [2024-05-15 13:41:52.864911] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0cd0) on tqpair=0x12a9590 00:23:40.162 ===================================================== 00:23:40.162 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.162 ===================================================== 00:23:40.162 Controller Capabilities/Features 00:23:40.162 ================================ 00:23:40.163 Vendor ID: 8086 00:23:40.163 Subsystem Vendor ID: 8086 00:23:40.163 Serial Number: SPDK00000000000001 00:23:40.163 Model Number: SPDK bdev Controller 00:23:40.163 Firmware Version: 24.05 00:23:40.163 Recommended Arb Burst: 6 00:23:40.163 IEEE OUI Identifier: e4 d2 5c 00:23:40.163 Multi-path I/O 00:23:40.163 May have multiple subsystem ports: Yes 00:23:40.163 May have multiple controllers: Yes 00:23:40.163 Associated with SR-IOV VF: No 00:23:40.163 Max Data Transfer Size: 131072 00:23:40.163 Max Number of Namespaces: 32 00:23:40.163 Max Number of I/O Queues: 127 00:23:40.163 NVMe Specification Version (VS): 1.3 00:23:40.163 NVMe Specification Version (Identify): 1.3 00:23:40.163 Maximum Queue Entries: 128 00:23:40.163 Contiguous Queues Required: Yes 00:23:40.163 Arbitration Mechanisms Supported 00:23:40.163 Weighted Round Robin: Not Supported 00:23:40.163 Vendor Specific: Not Supported 00:23:40.163 Reset Timeout: 15000 ms 00:23:40.163 Doorbell Stride: 4 bytes 00:23:40.163 NVM Subsystem Reset: Not Supported 00:23:40.163 Command Sets Supported 00:23:40.163 NVM Command Set: Supported 00:23:40.163 Boot Partition: Not Supported 00:23:40.163 Memory Page Size Minimum: 4096 bytes 00:23:40.163 Memory Page Size Maximum: 4096 bytes 00:23:40.163 Persistent Memory Region: Not Supported 00:23:40.163 Optional Asynchronous Events Supported 00:23:40.163 Namespace Attribute Notices: Supported 00:23:40.163 Firmware Activation Notices: Not Supported 00:23:40.163 ANA Change Notices: Not Supported 00:23:40.163 PLE Aggregate Log Change Notices: Not Supported 00:23:40.163 LBA Status Info Alert Notices: Not Supported 00:23:40.163 EGE Aggregate Log Change Notices: Not Supported 00:23:40.163 Normal NVM Subsystem Shutdown event: Not Supported 00:23:40.163 Zone Descriptor Change Notices: Not Supported 00:23:40.163 Discovery Log Change Notices: Not Supported 00:23:40.163 Controller Attributes 00:23:40.163 128-bit Host Identifier: Supported 00:23:40.163 Non-Operational Permissive Mode: Not Supported 00:23:40.163 NVM Sets: Not Supported 00:23:40.163 Read Recovery Levels: Not Supported 00:23:40.163 Endurance Groups: Not Supported 00:23:40.163 Predictable Latency Mode: Not Supported 00:23:40.163 Traffic Based Keep ALive: Not Supported 00:23:40.163 Namespace Granularity: Not Supported 00:23:40.163 SQ Associations: Not Supported 00:23:40.163 UUID List: Not Supported 00:23:40.163 Multi-Domain Subsystem: Not Supported 00:23:40.163 Fixed Capacity Management: Not Supported 00:23:40.163 Variable Capacity Management: Not Supported 00:23:40.163 Delete Endurance Group: Not Supported 00:23:40.163 Delete NVM Set: Not Supported 00:23:40.163 Extended LBA Formats Supported: Not Supported 00:23:40.163 Flexible Data Placement Supported: Not Supported 00:23:40.163 00:23:40.163 Controller Memory Buffer Support 00:23:40.163 ================================ 00:23:40.163 Supported: No 00:23:40.163 00:23:40.163 Persistent Memory Region Support 00:23:40.163 ================================ 00:23:40.163 Supported: No 00:23:40.163 00:23:40.163 Admin Command Set Attributes 00:23:40.163 ============================ 00:23:40.163 
Security Send/Receive: Not Supported 00:23:40.163 Format NVM: Not Supported 00:23:40.163 Firmware Activate/Download: Not Supported 00:23:40.163 Namespace Management: Not Supported 00:23:40.163 Device Self-Test: Not Supported 00:23:40.163 Directives: Not Supported 00:23:40.163 NVMe-MI: Not Supported 00:23:40.163 Virtualization Management: Not Supported 00:23:40.163 Doorbell Buffer Config: Not Supported 00:23:40.163 Get LBA Status Capability: Not Supported 00:23:40.163 Command & Feature Lockdown Capability: Not Supported 00:23:40.163 Abort Command Limit: 4 00:23:40.163 Async Event Request Limit: 4 00:23:40.163 Number of Firmware Slots: N/A 00:23:40.163 Firmware Slot 1 Read-Only: N/A 00:23:40.163 Firmware Activation Without Reset: N/A 00:23:40.163 Multiple Update Detection Support: N/A 00:23:40.163 Firmware Update Granularity: No Information Provided 00:23:40.163 Per-Namespace SMART Log: No 00:23:40.163 Asymmetric Namespace Access Log Page: Not Supported 00:23:40.163 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:40.163 Command Effects Log Page: Supported 00:23:40.163 Get Log Page Extended Data: Supported 00:23:40.163 Telemetry Log Pages: Not Supported 00:23:40.163 Persistent Event Log Pages: Not Supported 00:23:40.163 Supported Log Pages Log Page: May Support 00:23:40.163 Commands Supported & Effects Log Page: Not Supported 00:23:40.163 Feature Identifiers & Effects Log Page:May Support 00:23:40.163 NVMe-MI Commands & Effects Log Page: May Support 00:23:40.163 Data Area 4 for Telemetry Log: Not Supported 00:23:40.163 Error Log Page Entries Supported: 128 00:23:40.163 Keep Alive: Supported 00:23:40.163 Keep Alive Granularity: 10000 ms 00:23:40.163 00:23:40.163 NVM Command Set Attributes 00:23:40.163 ========================== 00:23:40.163 Submission Queue Entry Size 00:23:40.163 Max: 64 00:23:40.163 Min: 64 00:23:40.163 Completion Queue Entry Size 00:23:40.163 Max: 16 00:23:40.163 Min: 16 00:23:40.163 Number of Namespaces: 32 00:23:40.163 Compare Command: Supported 00:23:40.163 Write Uncorrectable Command: Not Supported 00:23:40.163 Dataset Management Command: Supported 00:23:40.163 Write Zeroes Command: Supported 00:23:40.163 Set Features Save Field: Not Supported 00:23:40.163 Reservations: Supported 00:23:40.163 Timestamp: Not Supported 00:23:40.163 Copy: Supported 00:23:40.163 Volatile Write Cache: Present 00:23:40.163 Atomic Write Unit (Normal): 1 00:23:40.163 Atomic Write Unit (PFail): 1 00:23:40.163 Atomic Compare & Write Unit: 1 00:23:40.163 Fused Compare & Write: Supported 00:23:40.163 Scatter-Gather List 00:23:40.163 SGL Command Set: Supported 00:23:40.163 SGL Keyed: Supported 00:23:40.163 SGL Bit Bucket Descriptor: Not Supported 00:23:40.163 SGL Metadata Pointer: Not Supported 00:23:40.163 Oversized SGL: Not Supported 00:23:40.163 SGL Metadata Address: Not Supported 00:23:40.163 SGL Offset: Supported 00:23:40.163 Transport SGL Data Block: Not Supported 00:23:40.163 Replay Protected Memory Block: Not Supported 00:23:40.163 00:23:40.163 Firmware Slot Information 00:23:40.163 ========================= 00:23:40.163 Active slot: 1 00:23:40.163 Slot 1 Firmware Revision: 24.05 00:23:40.163 00:23:40.163 00:23:40.163 Commands Supported and Effects 00:23:40.163 ============================== 00:23:40.163 Admin Commands 00:23:40.163 -------------- 00:23:40.163 Get Log Page (02h): Supported 00:23:40.163 Identify (06h): Supported 00:23:40.163 Abort (08h): Supported 00:23:40.163 Set Features (09h): Supported 00:23:40.163 Get Features (0Ah): Supported 00:23:40.163 Asynchronous Event Request 
(0Ch): Supported 00:23:40.163 Keep Alive (18h): Supported 00:23:40.163 I/O Commands 00:23:40.163 ------------ 00:23:40.163 Flush (00h): Supported LBA-Change 00:23:40.163 Write (01h): Supported LBA-Change 00:23:40.163 Read (02h): Supported 00:23:40.163 Compare (05h): Supported 00:23:40.163 Write Zeroes (08h): Supported LBA-Change 00:23:40.163 Dataset Management (09h): Supported LBA-Change 00:23:40.163 Copy (19h): Supported LBA-Change 00:23:40.163 Unknown (79h): Supported LBA-Change 00:23:40.163 Unknown (7Ah): Supported 00:23:40.163 00:23:40.163 Error Log 00:23:40.163 ========= 00:23:40.163 00:23:40.163 Arbitration 00:23:40.163 =========== 00:23:40.163 Arbitration Burst: 1 00:23:40.163 00:23:40.163 Power Management 00:23:40.163 ================ 00:23:40.163 Number of Power States: 1 00:23:40.163 Current Power State: Power State #0 00:23:40.163 Power State #0: 00:23:40.163 Max Power: 0.00 W 00:23:40.164 Non-Operational State: Operational 00:23:40.164 Entry Latency: Not Reported 00:23:40.164 Exit Latency: Not Reported 00:23:40.164 Relative Read Throughput: 0 00:23:40.164 Relative Read Latency: 0 00:23:40.164 Relative Write Throughput: 0 00:23:40.164 Relative Write Latency: 0 00:23:40.164 Idle Power: Not Reported 00:23:40.164 Active Power: Not Reported 00:23:40.164 Non-Operational Permissive Mode: Not Supported 00:23:40.164 00:23:40.164 Health Information 00:23:40.164 ================== 00:23:40.164 Critical Warnings: 00:23:40.164 Available Spare Space: OK 00:23:40.164 Temperature: OK 00:23:40.164 Device Reliability: OK 00:23:40.164 Read Only: No 00:23:40.164 Volatile Memory Backup: OK 00:23:40.164 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:40.164 Temperature Threshold: [2024-05-15 13:41:52.865089] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865100] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.865114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.865149] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0cd0, cid 7, qid 0 00:23:40.164 [2024-05-15 13:41:52.865195] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.865205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.865213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865221] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0cd0) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.865297] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:40.164 [2024-05-15 13:41:52.865320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.164 [2024-05-15 13:41:52.865334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.164 [2024-05-15 13:41:52.865346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.164 [2024-05-15 13:41:52.865358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
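[editor's note: the block below is an illustrative sketch, not part of the captured log. The debug trace above shows the SPDK host attaching to the NVMe-oF/TCP subsystem at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1) and walking the identify/set-features state machine that produces the controller report just printed. Roughly the same sequence can be driven from a standalone host application with SPDK's public API; everything here except the SPDK calls themselves (app name, file layout) is a hypothetical example, and error handling is minimal.]

/* connect_sketch.c - minimal host-side attach to the target seen in the log */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_sketch";   /* hypothetical application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Transport ID taken from the log: TCP target 10.0.0.2:4420, subsystem cnode1. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1");

	/* spdk_nvme_connect() performs the admin-queue setup, keep-alive,
	 * number-of-queues and identify steps traced in the debug output above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	/* The controller data backing the "Controller Capabilities/Features" report. */
	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model: %.40s  Serial: %.20s\n", cdata->mn, cdata->sn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

[end of editor's note; captured log continues]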
00:23:40.164 [2024-05-15 13:41:52.865374] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.865403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.865452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.865517] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.865531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.865539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865547] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.865561] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865568] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865576] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.865589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.865623] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.865698] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.865710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.865716] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865724] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.865736] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:40.164 [2024-05-15 13:41:52.865747] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:40.164 [2024-05-15 13:41:52.865765] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865782] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.865795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.865825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.865869] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.865881] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.865889] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 
13:41:52.865896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.865916] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865925] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.865932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.865945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.865974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866032] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866040] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.866097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.866126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866172] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866194] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866202] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.866248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.866288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866342] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866349] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866374] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866381] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866390] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.866403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.866432] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866474] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866501] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866509] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866553] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.866567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.866593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866638] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866651] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866659] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866705] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.164 [2024-05-15 13:41:52.866718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.164 [2024-05-15 13:41:52.866743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.164 [2024-05-15 13:41:52.866788] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.164 [2024-05-15 13:41:52.866799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.164 [2024-05-15 13:41:52.866807] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866815] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.164 [2024-05-15 13:41:52.866834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.164 [2024-05-15 13:41:52.866843] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.164 [2024-05-15 
13:41:52.866851] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.866863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.866889] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.866945] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.866954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.866962] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.866970] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.866989] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.866998] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867044] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867098] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867115] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867134] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867143] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867150] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867189] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867264] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867272] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867289] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867299] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867307] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867320] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867347] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867400] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867412] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867419] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867441] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867455] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867492] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867533] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867547] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867554] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867562] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867579] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867587] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867631] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867671] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867685] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867692] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867700] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867717] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867768] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867812] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867825] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867833] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867841] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867857] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867864] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867871] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.867883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.867905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.867946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.867959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.867967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867975] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.867992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.867999] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.868006] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.868016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.868039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.868081] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.165 [2024-05-15 13:41:52.868095] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.165 [2024-05-15 13:41:52.868103] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.868111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590 00:23:40.165 [2024-05-15 13:41:52.868128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.868136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.165 [2024-05-15 13:41:52.868144] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590) 00:23:40.165 [2024-05-15 13:41:52.868156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.165 [2024-05-15 13:41:52.868176] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0 00:23:40.165 [2024-05-15 13:41:52.868218] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:23:40.165 [2024-05-15 13:41:52.868231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:40.165 [2024-05-15 13:41:52.868250] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:40.165 [2024-05-15 13:41:52.868258] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590
00:23:40.165 [2024-05-15 13:41:52.868276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:40.165 [2024-05-15 13:41:52.868284] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:40.165 [2024-05-15 13:41:52.868292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a9590)
00:23:40.165 [2024-05-15 13:41:52.868304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:40.165 [2024-05-15 13:41:52.868324] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0
[... the same pdu type 5 / capsule_resp / FABRIC PROPERTY GET qid:0 cid:3 polling cycle for tcp req 0x12f0750 on tqpair 0x12a9590 repeats unchanged, only the microsecond timestamps advancing, through 13:41:52.872226 ...]
00:23:40.168 [2024-05-15 13:41:52.885317] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f0750, cid 3, qid 0
00:23:40.168 [2024-05-15 13:41:52.885384] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:40.168 [2024-05-15 13:41:52.885399] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:40.168 [2024-05-15 13:41:52.885408] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:40.168 [2024-05-15 13:41:52.885417] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f0750) on tqpair=0x12a9590
00:23:40.168 [2024-05-15 13:41:52.885446] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown
complete in 19 milliseconds 00:23:40.168 0 Kelvin (-273 Celsius) 00:23:40.168 Available Spare: 0% 00:23:40.168 Available Spare Threshold: 0% 00:23:40.168 Life Percentage Used: 0% 00:23:40.168 Data Units Read: 0 00:23:40.168 Data Units Written: 0 00:23:40.168 Host Read Commands: 0 00:23:40.168 Host Write Commands: 0 00:23:40.168 Controller Busy Time: 0 minutes 00:23:40.168 Power Cycles: 0 00:23:40.168 Power On Hours: 0 hours 00:23:40.168 Unsafe Shutdowns: 0 00:23:40.168 Unrecoverable Media Errors: 0 00:23:40.168 Lifetime Error Log Entries: 0 00:23:40.168 Warning Temperature Time: 0 minutes 00:23:40.168 Critical Temperature Time: 0 minutes 00:23:40.168 00:23:40.168 Number of Queues 00:23:40.168 ================ 00:23:40.168 Number of I/O Submission Queues: 127 00:23:40.168 Number of I/O Completion Queues: 127 00:23:40.168 00:23:40.168 Active Namespaces 00:23:40.168 ================= 00:23:40.168 Namespace ID:1 00:23:40.168 Error Recovery Timeout: Unlimited 00:23:40.168 Command Set Identifier: NVM (00h) 00:23:40.168 Deallocate: Supported 00:23:40.168 Deallocated/Unwritten Error: Not Supported 00:23:40.168 Deallocated Read Value: Unknown 00:23:40.168 Deallocate in Write Zeroes: Not Supported 00:23:40.168 Deallocated Guard Field: 0xFFFF 00:23:40.168 Flush: Supported 00:23:40.168 Reservation: Supported 00:23:40.168 Namespace Sharing Capabilities: Multiple Controllers 00:23:40.168 Size (in LBAs): 131072 (0GiB) 00:23:40.168 Capacity (in LBAs): 131072 (0GiB) 00:23:40.168 Utilization (in LBAs): 131072 (0GiB) 00:23:40.168 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:40.168 EUI64: ABCDEF0123456789 00:23:40.168 UUID: 2be418de-445e-44a3-9a70-183c794ebf3f 00:23:40.168 Thin Provisioning: Not Supported 00:23:40.168 Per-NS Atomic Units: Yes 00:23:40.168 Atomic Boundary Size (Normal): 0 00:23:40.168 Atomic Boundary Size (PFail): 0 00:23:40.168 Atomic Boundary Offset: 0 00:23:40.168 Maximum Single Source Range Length: 65535 00:23:40.168 Maximum Copy Length: 65535 00:23:40.168 Maximum Source Range Count: 1 00:23:40.168 NGUID/EUI64 Never Reused: No 00:23:40.168 Namespace Write Protected: No 00:23:40.168 Number of LBA Formats: 1 00:23:40.168 Current LBA Format: LBA Format #00 00:23:40.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:40.168 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.168 rmmod nvme_tcp 00:23:40.168 rmmod nvme_fabrics 00:23:40.168 13:41:52 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 89626 ']' 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 89626 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 89626 ']' 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 89626 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.168 13:41:52 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89626 00:23:40.168 killing process with pid 89626 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89626' 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 89626 00:23:40.168 [2024-05-15 13:41:53.004815] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 89626 00:23:40.168 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.169 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.428 13:41:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:40.428 ************************************ 00:23:40.428 END TEST nvmf_identify 00:23:40.428 ************************************ 00:23:40.428 00:23:40.428 real 0m1.853s 00:23:40.428 user 0m4.129s 00:23:40.428 sys 0m0.656s 00:23:40.428 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:40.428 13:41:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:40.428 13:41:53 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:40.428 13:41:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:40.428 13:41:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:40.428 13:41:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:40.428 ************************************ 00:23:40.428 START TEST nvmf_perf 00:23:40.428 
************************************ 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:40.428 * Looking for test storage... 00:23:40.428 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:40.428 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.429 13:41:53 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:40.429 Cannot find device "nvmf_tgt_br" 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:40.429 Cannot find device "nvmf_tgt_br2" 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:40.429 Cannot find device "nvmf_tgt_br" 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:23:40.429 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:40.429 Cannot find device "nvmf_tgt_br2" 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:40.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:40.687 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:40.687 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:40.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:23:40.945 00:23:40.945 --- 10.0.0.2 ping statistics --- 00:23:40.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.945 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:40.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:40.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:40.945 00:23:40.945 --- 10.0.0.3 ping statistics --- 00:23:40.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.945 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:40.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:40.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:40.945 00:23:40.945 --- 10.0.0.1 ping statistics --- 00:23:40.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.945 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=89825 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 89825 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 89825 ']' 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:40.945 13:41:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.945 [2024-05-15 13:41:53.887293] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:23:40.945 [2024-05-15 13:41:53.887554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.945 [2024-05-15 13:41:54.009143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:23:40.945 [2024-05-15 13:41:54.027069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.203 [2024-05-15 13:41:54.090312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.203 [2024-05-15 13:41:54.090581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.203 [2024-05-15 13:41:54.090817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.203 [2024-05-15 13:41:54.091011] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.203 [2024-05-15 13:41:54.091175] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.203 [2024-05-15 13:41:54.091362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.203 [2024-05-15 13:41:54.091497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.203 [2024-05-15 13:41:54.091557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.203 [2024-05-15 13:41:54.091551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:41.203 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:41.800 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:41.800 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:42.058 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:42.058 13:41:54 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:42.316 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:42.316 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:42.316 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:42.316 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:42.316 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:42.574 [2024-05-15 13:41:55.589823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:42.574 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.833 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:42.833 13:41:55 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.092 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:43.092 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:43.350 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.608 [2024-05-15 13:41:56.500165] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:43.608 [2024-05-15 13:41:56.500927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.608 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:43.866 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:43.866 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:43.866 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:43.866 13:41:56 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:44.800 Initializing NVMe Controllers 00:23:44.800 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:44.800 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:44.800 Initialization complete. Launching workers. 00:23:44.800 ======================================================== 00:23:44.800 Latency(us) 00:23:44.800 Device Information : IOPS MiB/s Average min max 00:23:44.800 PCIE (0000:00:10.0) NSID 1 from core 0: 22848.00 89.25 1400.61 297.53 19653.38 00:23:44.800 ======================================================== 00:23:44.800 Total : 22848.00 89.25 1400.61 297.53 19653.38 00:23:44.800 00:23:44.800 13:41:57 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.250 Initializing NVMe Controllers 00:23:46.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:46.250 Initialization complete. Launching workers. 
00:23:46.250 ======================================================== 00:23:46.250 Latency(us) 00:23:46.250 Device Information : IOPS MiB/s Average min max 00:23:46.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4226.98 16.51 236.31 84.93 16083.27 00:23:46.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24657.98 1444.49 37972.36 00:23:46.251 ======================================================== 00:23:46.251 Total : 4267.98 16.67 470.92 84.93 37972.36 00:23:46.251 00:23:46.251 13:41:59 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.149 Initializing NVMe Controllers 00:23:48.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.149 Initialization complete. Launching workers. 00:23:48.149 ======================================================== 00:23:48.149 Latency(us) 00:23:48.149 Device Information : IOPS MiB/s Average min max 00:23:48.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9050.16 35.35 3537.33 553.03 21847.58 00:23:48.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1279.90 5.00 25312.02 13356.49 32211.42 00:23:48.149 ======================================================== 00:23:48.149 Total : 10330.06 40.35 6235.22 553.03 32211.42 00:23:48.149 00:23:48.149 13:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:48.149 13:42:00 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:50.847 Initializing NVMe Controllers 00:23:50.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:50.847 Controller IO queue size 128, less than required. 00:23:50.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.847 Controller IO queue size 128, less than required. 00:23:50.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:50.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:50.847 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:50.847 Initialization complete. Launching workers. 
00:23:50.847 ======================================================== 00:23:50.847 Latency(us) 00:23:50.847 Device Information : IOPS MiB/s Average min max 00:23:50.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1855.35 463.84 69641.56 27753.21 125889.20 00:23:50.847 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 253.36 63.34 591993.64 253237.00 969264.23 00:23:50.847 ======================================================== 00:23:50.847 Total : 2108.71 527.18 132402.88 27753.21 969264.23 00:23:50.847 00:23:50.847 13:42:03 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:51.105 Initializing NVMe Controllers 00:23:51.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.105 Controller IO queue size 128, less than required. 00:23:51.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.105 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:51.105 Controller IO queue size 128, less than required. 00:23:51.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.105 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:51.105 WARNING: Some requested NVMe devices were skipped 00:23:51.105 No valid NVMe controllers or AIO or URING devices found 00:23:51.105 13:42:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:54.385 Initializing NVMe Controllers 00:23:54.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.385 Controller IO queue size 128, less than required. 00:23:54.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.385 Controller IO queue size 128, less than required. 00:23:54.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.385 Initialization complete. Launching workers. 
00:23:54.385 00:23:54.385 ==================== 00:23:54.385 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:54.385 TCP transport: 00:23:54.385 polls: 19912 00:23:54.385 idle_polls: 0 00:23:54.385 sock_completions: 19912 00:23:54.385 nvme_completions: 6713 00:23:54.385 submitted_requests: 10056 00:23:54.385 queued_requests: 1 00:23:54.385 00:23:54.385 ==================== 00:23:54.385 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:54.385 TCP transport: 00:23:54.385 polls: 18710 00:23:54.385 idle_polls: 0 00:23:54.385 sock_completions: 18710 00:23:54.385 nvme_completions: 6163 00:23:54.386 submitted_requests: 9268 00:23:54.386 queued_requests: 1 00:23:54.386 ======================================================== 00:23:54.386 Latency(us) 00:23:54.386 Device Information : IOPS MiB/s Average min max 00:23:54.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1664.52 416.13 77615.25 54227.09 152778.34 00:23:54.386 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1528.13 382.03 84897.84 38346.03 135856.98 00:23:54.386 ======================================================== 00:23:54.386 Total : 3192.65 798.16 81100.98 38346.03 152778.34 00:23:54.386 00:23:54.386 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:54.386 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.386 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:23:54.386 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:23:54.386 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=4254bf5a-d388-4680-a555-cc9ed46dd476 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 4254bf5a-d388-4680-a555-cc9ed46dd476 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=4254bf5a-d388-4680-a555-cc9ed46dd476 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:54.643 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:54.901 { 00:23:54.901 "uuid": "4254bf5a-d388-4680-a555-cc9ed46dd476", 00:23:54.901 "name": "lvs_0", 00:23:54.901 "base_bdev": "Nvme0n1", 00:23:54.901 "total_data_clusters": 1278, 00:23:54.901 "free_clusters": 1278, 00:23:54.901 "block_size": 4096, 00:23:54.901 "cluster_size": 4194304 00:23:54.901 } 00:23:54.901 ]' 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="4254bf5a-d388-4680-a555-cc9ed46dd476") .free_clusters' 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="4254bf5a-d388-4680-a555-cc9ed46dd476") .cluster_size' 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:23:54.901 
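The free-space check that follows is plain arithmetic: get_lvs_free_mb multiplies the store's free_clusters by its cluster_size and converts bytes to MiB, so the 1278 free 4194304-byte clusters queried above come out as 5112 MB, which the next lines echo and compare against the 20480 MB threshold. A minimal sketch of that computation; the rpc.py path, UUID and jq filters are the ones recorded above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs_uuid=4254bf5a-d388-4680-a555-cc9ed46dd476

  # free_clusters and cluster_size come straight from bdev_lvol_get_lvstores
  fc=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
  cs=$($rpc bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")

  # 1278 clusters * 4194304 B per cluster / 1048576 B per MiB = 5112 MB
  free_mb=$(( fc * cs / 1024 / 1024 ))
  echo "$free_mb"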
13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:23:54.901 5112 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:23:54.901 13:42:07 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4254bf5a-d388-4680-a555-cc9ed46dd476 lbd_0 5112 00:23:55.159 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=724916e6-d1f8-4868-b0a6-2fdbf6092cb8 00:23:55.159 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 724916e6-d1f8-4868-b0a6-2fdbf6092cb8 lvs_n_0 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=253c4e7b-63c5-49df-8f43-92297ffc962a 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 253c4e7b-63c5-49df-8f43-92297ffc962a 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=253c4e7b-63c5-49df-8f43-92297ffc962a 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:23:55.726 { 00:23:55.726 "uuid": "4254bf5a-d388-4680-a555-cc9ed46dd476", 00:23:55.726 "name": "lvs_0", 00:23:55.726 "base_bdev": "Nvme0n1", 00:23:55.726 "total_data_clusters": 1278, 00:23:55.726 "free_clusters": 0, 00:23:55.726 "block_size": 4096, 00:23:55.726 "cluster_size": 4194304 00:23:55.726 }, 00:23:55.726 { 00:23:55.726 "uuid": "253c4e7b-63c5-49df-8f43-92297ffc962a", 00:23:55.726 "name": "lvs_n_0", 00:23:55.726 "base_bdev": "724916e6-d1f8-4868-b0a6-2fdbf6092cb8", 00:23:55.726 "total_data_clusters": 1276, 00:23:55.726 "free_clusters": 1276, 00:23:55.726 "block_size": 4096, 00:23:55.726 "cluster_size": 4194304 00:23:55.726 } 00:23:55.726 ]' 00:23:55.726 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="253c4e7b-63c5-49df-8f43-92297ffc962a") .free_clusters' 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="253c4e7b-63c5-49df-8f43-92297ffc962a") .cluster_size' 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:23:55.983 5104 00:23:55.983 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:23:55.984 13:42:08 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 253c4e7b-63c5-49df-8f43-92297ffc962a lbd_nest_0 5104 00:23:56.241 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a2545284-baf8-4570-ae1e-3305b94379ad 00:23:56.241 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
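The 5112 MB volume lbd_0 just created becomes the base bdev for a nested store (lvs_n_0), and the 5104 MB lvol carved from that is what gets exported over NVMe/TCP and swept by spdk_nvme_perf at queue depths 1, 32 and 128 with 512-byte and 128 KiB IOs in the loops that follow. A condensed sketch of that export-and-sweep sequence, with every path, NQN, address and flag taken from the commands in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

  # Expose the nested lvol bdev through an NVMe-oF subsystem listening on TCP 10.0.0.2:4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a2545284-baf8-4570-ae1e-3305b94379ad
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Sweep queue depth and IO size; each run is 10 s of 50/50 random read/write
  qd_depth=("1" "32" "128")
  io_size=("512" "131072")
  for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
      $perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
  done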
00:23:56.241 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:23:56.241 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a2545284-baf8-4570-ae1e-3305b94379ad 00:23:56.499 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:56.757 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:23:56.757 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:23:56.757 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:56.757 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:56.757 13:42:09 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:57.324 Initializing NVMe Controllers 00:23:57.324 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.324 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:23:57.324 WARNING: Some requested NVMe devices were skipped 00:23:57.324 No valid NVMe controllers or AIO or URING devices found 00:23:57.582 13:42:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:57.582 13:42:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.814 Initializing NVMe Controllers 00:24:09.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:09.814 Initialization complete. Launching workers. 
00:24:09.814 ======================================================== 00:24:09.814 Latency(us) 00:24:09.814 Device Information : IOPS MiB/s Average min max 00:24:09.814 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1095.80 136.97 911.10 282.85 23235.32 00:24:09.814 ======================================================== 00:24:09.814 Total : 1095.80 136.97 911.10 282.85 23235.32 00:24:09.814 00:24:09.814 13:42:20 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:09.814 13:42:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:09.814 13:42:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:09.814 Initializing NVMe Controllers 00:24:09.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:09.814 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:09.814 WARNING: Some requested NVMe devices were skipped 00:24:09.814 No valid NVMe controllers or AIO or URING devices found 00:24:09.814 13:42:21 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:09.814 13:42:21 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.780 Initializing NVMe Controllers 00:24:19.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:19.780 Initialization complete. Launching workers. 
00:24:19.780 ======================================================== 00:24:19.780 Latency(us) 00:24:19.780 Device Information : IOPS MiB/s Average min max 00:24:19.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 384.47 48.06 83552.69 23305.67 181961.81 00:24:19.780 ======================================================== 00:24:19.780 Total : 384.47 48.06 83552.69 23305.67 181961.81 00:24:19.780 00:24:19.780 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:24:19.780 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:19.780 13:42:31 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:19.780 Initializing NVMe Controllers 00:24:19.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:19.780 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:24:19.780 WARNING: Some requested NVMe devices were skipped 00:24:19.780 No valid NVMe controllers or AIO or URING devices found 00:24:19.780 13:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:24:19.780 13:42:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.754 Initializing NVMe Controllers 00:24:29.754 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.754 Controller IO queue size 128, less than required. 00:24:29.754 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:29.754 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.754 Initialization complete. Launching workers. 
00:24:29.754 ======================================================== 00:24:29.754 Latency(us) 00:24:29.754 Device Information : IOPS MiB/s Average min max 00:24:29.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3398.10 424.76 37722.55 10796.92 86983.31 00:24:29.755 ======================================================== 00:24:29.755 Total : 3398.10 424.76 37722.55 10796.92 86983.31 00:24:29.755 00:24:30.020 13:42:42 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.279 13:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a2545284-baf8-4570-ae1e-3305b94379ad 00:24:30.536 13:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:30.793 13:42:43 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 724916e6-d1f8-4868-b0a6-2fdbf6092cb8 00:24:31.049 13:42:44 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.308 rmmod nvme_tcp 00:24:31.308 rmmod nvme_fabrics 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 89825 ']' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 89825 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 89825 ']' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 89825 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89825 00:24:31.308 killing process with pid 89825 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89825' 00:24:31.308 13:42:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 89825 00:24:31.308 [2024-05-15 13:42:44.350870] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:31.308 13:42:44 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 89825 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:32.680 ************************************ 00:24:32.680 END TEST nvmf_perf 00:24:32.680 ************************************ 00:24:32.680 00:24:32.680 real 0m52.444s 00:24:32.680 user 3m15.377s 00:24:32.680 sys 0m15.345s 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:32.680 13:42:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:32.938 13:42:45 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:32.938 13:42:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:32.938 13:42:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:32.938 13:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:32.938 ************************************ 00:24:32.938 START TEST nvmf_fio_host 00:24:32.938 ************************************ 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:32.938 * Looking for test storage... 
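Teardown at the end of host/perf.sh mirrors setup in reverse: delete the NVMe-oF subsystem first, then the nested lvol and its store, then the base lvol and lvs_0, and finally unload the kernel initiator modules and stop the target application (pid 89825 here). Condensed from the commands above; killprocess is a test helper, so a bare kill of the saved pid stands in for it in this sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1        # no initiator may hold the namespaces now
  $rpc bdev_lvol_delete a2545284-baf8-4570-ae1e-3305b94379ad   # nested lvol (lbd_nest_0)
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete 724916e6-d1f8-4868-b0a6-2fdbf6092cb8   # base lvol (lbd_0)
  $rpc bdev_lvol_delete_lvstore -l lvs_0

  modprobe -v -r nvme-tcp       # rmmod nvme_tcp / nvme_fabrics, as logged above
  modprobe -v -r nvme-fabrics
  kill 89825                    # nvmf_tgt pid saved at startup ($nvmfpid in the script)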
00:24:32.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:32.938 13:42:45 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:32.938 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:32.939 13:42:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:32.939 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:32.939 Cannot find device "nvmf_tgt_br" 00:24:32.939 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:24:32.939 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.196 Cannot find device "nvmf_tgt_br2" 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:33.196 Cannot find device "nvmf_tgt_br" 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:33.196 Cannot find device "nvmf_tgt_br2" 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.196 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:33.196 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:33.453 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.453 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:24:33.453 00:24:33.453 --- 10.0.0.2 ping statistics --- 00:24:33.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.453 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:33.453 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:33.453 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:33.453 00:24:33.453 --- 10.0.0.3 ping statistics --- 00:24:33.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.453 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:33.453 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.453 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:33.453 00:24:33.453 --- 10.0.0.1 ping statistics --- 00:24:33.453 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.453 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=90645 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 90645 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 90645 ']' 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:33.453 13:42:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.453 [2024-05-15 13:42:46.425622] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:33.453 [2024-05-15 13:42:46.425930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.711 [2024-05-15 13:42:46.572108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
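The interface and ping churn above is nvmf_veth_init building the fio-host topology: the target runs inside the nvmf_tgt_ns_spdk namespace on 10.0.0.2 (with a second interface on 10.0.0.3), the initiator stays in the root namespace on 10.0.0.1, the veth peer ends are tied together with a bridge, and TCP port 4420 is opened before nvmf_tgt is started in the namespace with core mask 0xF. A condensed sketch using only commands recorded in this run; the 10.0.0.3 interface is set up the same way and omitted here, and the trailing & backgrounds the target as the script does:

  # Target namespace and veth pairs (the initiator side stays in the root namespace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addresses: initiator 10.0.0.1, target 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring links up and bridge the peer ends together
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Allow NVMe/TCP traffic plus bridge forwarding, then verify reachability
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2

  # Start the target inside the namespace (core mask 0xF, all trace groups enabled)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &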
00:24:33.711 [2024-05-15 13:42:46.588026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:33.711 [2024-05-15 13:42:46.641000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.711 [2024-05-15 13:42:46.641387] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.711 [2024-05-15 13:42:46.641643] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.711 [2024-05-15 13:42:46.641838] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.711 [2024-05-15 13:42:46.641936] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.711 [2024-05-15 13:42:46.642232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.711 [2024-05-15 13:42:46.642302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.711 [2024-05-15 13:42:46.642743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.711 [2024-05-15 13:42:46.643354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.277 [2024-05-15 13:42:47.356473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.277 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.535 Malloc1 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:34.535 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.536 [2024-05-15 13:42:47.460436] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:34.536 [2024-05-15 13:42:47.461251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:34.536 13:42:47 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:34.536 13:42:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:34.794 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:34.794 fio-3.35 00:24:34.794 Starting 1 thread 00:24:37.317 00:24:37.317 test: (groupid=0, jobs=1): err= 0: pid=90710: Wed May 15 13:42:49 2024 00:24:37.317 read: IOPS=9031, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:24:37.317 slat (nsec): min=1780, max=188468, avg=2111.97, stdev=1940.97 00:24:37.317 clat (usec): min=1756, max=13006, avg=7368.27, stdev=960.81 00:24:37.317 lat (usec): min=1778, max=13008, avg=7370.38, stdev=960.79 00:24:37.317 clat percentiles (usec): 00:24:37.317 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6718], 00:24:37.317 | 30.00th=[ 6915], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:24:37.317 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8979], 95.00th=[ 9634], 00:24:37.317 | 99.00th=[10421], 99.50th=[10683], 99.90th=[11600], 99.95th=[12780], 00:24:37.317 | 99.99th=[13042] 00:24:37.317 bw ( KiB/s): min=30664, max=38192, per=99.90%, avg=36090.00, stdev=3624.60, samples=4 00:24:37.317 iops : min= 7666, max= 9548, avg=9022.50, stdev=906.15, samples=4 00:24:37.317 write: IOPS=9048, BW=35.3MiB/s (37.1MB/s)(70.9MiB/2006msec); 0 zone resets 00:24:37.317 slat (nsec): min=1833, max=109316, avg=2212.36, stdev=1374.31 00:24:37.317 clat (usec): min=1162, max=12270, avg=6696.66, stdev=875.05 00:24:37.317 lat (usec): min=1168, max=12272, avg=6698.87, stdev=875.08 00:24:37.317 clat percentiles (usec): 00:24:37.317 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:24:37.317 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:24:37.317 | 70.00th=[ 6718], 80.00th=[ 6980], 90.00th=[ 8225], 95.00th=[ 8848], 00:24:37.317 | 99.00th=[ 9372], 99.50th=[ 9503], 99.90th=[10159], 99.95th=[11207], 00:24:37.317 | 99.99th=[12256] 00:24:37.317 bw ( KiB/s): min=31624, max=37928, per=100.00%, avg=36194.00, stdev=3060.66, samples=4 00:24:37.317 iops : min= 7906, max= 9482, avg=9048.50, stdev=765.16, samples=4 00:24:37.317 lat (msec) : 2=0.04%, 4=0.07%, 10=98.47%, 20=1.41% 00:24:37.317 cpu : usr=70.62%, sys=23.89%, ctx=115, majf=0, minf=3 00:24:37.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:37.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.317 issued rwts: total=18117,18152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.317 00:24:37.317 Run status group 0 (all jobs): 00:24:37.317 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2006-2006msec 00:24:37.317 WRITE: bw=35.3MiB/s (37.1MB/s), 35.3MiB/s-35.3MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.3MB), run=2006-2006msec 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:37.317 13:42:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:37.317 13:42:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:37.317 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:37.317 fio-3.35 00:24:37.317 Starting 1 thread 00:24:39.844 00:24:39.844 test: (groupid=0, jobs=1): err= 0: pid=90754: Wed May 15 13:42:52 2024 00:24:39.844 read: IOPS=9054, BW=141MiB/s (148MB/s)(284MiB/2007msec) 00:24:39.844 slat (usec): min=2, max=156, avg= 3.34, stdev= 1.93 00:24:39.844 clat (usec): min=2115, max=16932, avg=7949.57, stdev=2437.75 00:24:39.844 lat (usec): min=2118, max=16936, avg=7952.91, stdev=2437.87 00:24:39.844 clat percentiles (usec): 00:24:39.844 | 1.00th=[ 3818], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5735], 00:24:39.844 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8291], 00:24:39.844 | 70.00th=[ 
9110], 80.00th=[10028], 90.00th=[11338], 95.00th=[12518], 00:24:39.844 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16319], 99.95th=[16909], 00:24:39.844 | 99.99th=[16909] 00:24:39.844 bw ( KiB/s): min=63808, max=79776, per=49.43%, avg=71608.00, stdev=6579.11, samples=4 00:24:39.844 iops : min= 3988, max= 4986, avg=4475.50, stdev=411.19, samples=4 00:24:39.844 write: IOPS=5227, BW=81.7MiB/s (85.7MB/s)(146MiB/1790msec); 0 zone resets 00:24:39.844 slat (usec): min=32, max=211, avg=36.07, stdev= 5.06 00:24:39.844 clat (usec): min=2526, max=18635, avg=11057.07, stdev=1978.95 00:24:39.844 lat (usec): min=2562, max=18673, avg=11093.13, stdev=1978.94 00:24:39.844 clat percentiles (usec): 00:24:39.844 | 1.00th=[ 7046], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9372], 00:24:39.844 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338], 00:24:39.844 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13829], 95.00th=[14746], 00:24:39.844 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17695], 99.95th=[18220], 00:24:39.844 | 99.99th=[18744] 00:24:39.844 bw ( KiB/s): min=66944, max=82240, per=88.77%, avg=74256.00, stdev=6293.38, samples=4 00:24:39.844 iops : min= 4184, max= 5140, avg=4641.00, stdev=393.34, samples=4 00:24:39.844 lat (msec) : 4=1.07%, 10=62.60%, 20=36.33% 00:24:39.844 cpu : usr=80.31%, sys=15.75%, ctx=66, majf=0, minf=8 00:24:39.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:39.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:39.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:39.844 issued rwts: total=18172,9358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:39.844 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:39.844 00:24:39.844 Run status group 0 (all jobs): 00:24:39.844 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=284MiB (298MB), run=2007-2007msec 00:24:39.844 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=146MiB (153MB), run=1790-1790msec 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:24:39.844 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:39.845 
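Both fio jobs above run through SPDK's fio NVMe plugin rather than the kernel initiator: the plugin library is LD_PRELOADed into the stock fio binary and the remote namespace is selected entirely by the --filename string, which carries the transport ID (trtype, adrfam, traddr, trsvcid) plus the namespace id, while ioengine=spdk in the job file routes the IO to the plugin. Reduced to its essentials from the invocations logged above:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # The filename is parsed by the plugin as an NVMe-oF transport ID plus namespace id,
  # not as a block device path
  LD_PRELOAD="$plugin" /usr/src/fio/fio "$job" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second invocation swaps in mock_sgl_config.fio and drives the same connection with 16 KiB IOs, as its job output above shows.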
13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 Nvme0n1 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=a26b9690-aefa-42d6-ad1f-e34dff69a420 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb a26b9690-aefa-42d6-ad1f-e34dff69a420 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=a26b9690-aefa-42d6-ad1f-e34dff69a420 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:39.845 { 00:24:39.845 "uuid": "a26b9690-aefa-42d6-ad1f-e34dff69a420", 00:24:39.845 "name": "lvs_0", 00:24:39.845 "base_bdev": "Nvme0n1", 00:24:39.845 "total_data_clusters": 4, 00:24:39.845 "free_clusters": 4, 00:24:39.845 "block_size": 4096, 00:24:39.845 "cluster_size": 1073741824 00:24:39.845 } 00:24:39.845 ]' 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="a26b9690-aefa-42d6-ad1f-e34dff69a420") .free_clusters' 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="a26b9690-aefa-42d6-ad1f-e34dff69a420") .cluster_size' 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:24:39.845 4096 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 2d1c0747-c5c1-40cf-9de8-461ce24c279e 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:39.845 13:42:52 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:39.845 13:42:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:40.103 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:40.103 fio-3.35 00:24:40.103 Starting 1 thread 00:24:42.630 00:24:42.630 test: (groupid=0, jobs=1): err= 0: pid=90833: Wed May 15 13:42:55 2024 00:24:42.630 read: IOPS=6412, BW=25.0MiB/s (26.3MB/s)(50.3MiB/2008msec) 00:24:42.630 slat (nsec): min=1644, max=509449, avg=2381.99, stdev=5561.52 00:24:42.630 clat (usec): min=3368, max=19852, avg=10446.86, stdev=1318.82 00:24:42.630 lat (usec): min=3424, max=19854, avg=10449.24, stdev=1318.41 00:24:42.630 clat percentiles (usec): 00:24:42.630 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:24:42.630 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:24:42.630 | 70.00th=[10683], 80.00th=[11076], 90.00th=[12256], 95.00th=[13173], 00:24:42.630 | 99.00th=[14746], 99.50th=[15795], 99.90th=[16909], 99.95th=[17171], 00:24:42.630 | 99.99th=[17433] 00:24:42.630 bw ( KiB/s): min=24136, max=26488, per=99.85%, avg=25610.00, stdev=1020.42, samples=4 00:24:42.630 iops : min= 6034, max= 6622, avg=6402.50, stdev=255.10, samples=4 00:24:42.630 write: IOPS=6410, BW=25.0MiB/s (26.3MB/s)(50.3MiB/2008msec); 0 zone resets 00:24:42.630 slat (nsec): min=1707, max=375739, avg=2500.37, stdev=3657.17 00:24:42.630 clat (usec): min=3273, max=17102, avg=9429.73, stdev=1202.51 00:24:42.630 lat (usec): min=3291, max=17104, avg=9432.23, stdev=1202.34 00:24:42.630 clat percentiles (usec): 00:24:42.630 | 1.00th=[ 7504], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8586], 00:24:42.630 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9503], 00:24:42.630 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10945], 95.00th=[11863], 00:24:42.630 | 99.00th=[13304], 99.50th=[14615], 99.90th=[16057], 99.95th=[16188], 00:24:42.630 | 99.99th=[16319] 00:24:42.631 bw ( KiB/s): min=23744, max=26768, per=99.96%, avg=25634.00, stdev=1421.01, samples=4 00:24:42.631 iops : min= 5936, max= 6692, avg=6408.50, stdev=355.25, samples=4 00:24:42.631 lat (msec) : 4=0.09%, 10=59.40%, 20=40.51% 00:24:42.631 cpu : usr=72.25%, sys=23.62%, ctx=7, majf=0, minf=12 00:24:42.631 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:42.631 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:42.631 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:42.631 issued rwts: total=12876,12873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:42.631 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:42.631 00:24:42.631 Run status group 0 (all jobs): 00:24:42.631 READ: bw=25.0MiB/s (26.3MB/s), 25.0MiB/s-25.0MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.7MB), run=2008-2008msec 00:24:42.631 WRITE: bw=25.0MiB/s (26.3MB/s), 25.0MiB/s-25.0MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.7MB), run=2008-2008msec 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 
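The fio_nvme wrapper above first probes the plugin with ldd for an ASan runtime to place ahead of it in LD_PRELOAD (none is found in this build, so asan_lib stays empty), then hands fio the SPDK NVMe external ioengine plus a filename string that encodes the NVMe/TCP transport ID. Reduced to a hand-runnable sketch, with every path and parameter copied from the trace above, the invocation is roughly:

# Sketch of the fio_plugin call above: fio drives an NVMe/TCP namespace through
# the SPDK NVMe ioengine (ioengine=spdk comes from the job file; the --filename
# string is a transport ID rather than a block device path).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
  --bs=4096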
00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=5a2eba20-e98c-426a-ab6d-8e0c782b5730 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 5a2eba20-e98c-426a-ab6d-8e0c782b5730 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=5a2eba20-e98c-426a-ab6d-8e0c782b5730 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:24:42.631 { 00:24:42.631 "uuid": "a26b9690-aefa-42d6-ad1f-e34dff69a420", 00:24:42.631 "name": "lvs_0", 00:24:42.631 "base_bdev": "Nvme0n1", 00:24:42.631 "total_data_clusters": 4, 00:24:42.631 "free_clusters": 0, 00:24:42.631 "block_size": 4096, 00:24:42.631 "cluster_size": 1073741824 00:24:42.631 }, 00:24:42.631 { 00:24:42.631 "uuid": "5a2eba20-e98c-426a-ab6d-8e0c782b5730", 00:24:42.631 "name": "lvs_n_0", 00:24:42.631 "base_bdev": "2d1c0747-c5c1-40cf-9de8-461ce24c279e", 00:24:42.631 "total_data_clusters": 1022, 00:24:42.631 "free_clusters": 1022, 00:24:42.631 "block_size": 4096, 00:24:42.631 "cluster_size": 4194304 00:24:42.631 } 00:24:42.631 ]' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="5a2eba20-e98c-426a-ab6d-8e0c782b5730") .free_clusters' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="5a2eba20-e98c-426a-ab6d-8e0c782b5730") .cluster_size' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:24:42.631 4088 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 
00:24:42.631 41ea7e55-4c67-477b-b338-6b0be68a8243 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:42.631 13:42:55 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:42.631 13:42:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:42.631 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:42.631 fio-3.35 00:24:42.631 Starting 1 thread 00:24:45.158 00:24:45.158 test: (groupid=0, jobs=1): err= 0: pid=90888: Wed May 15 13:42:57 2024 00:24:45.158 read: IOPS=5702, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec) 00:24:45.158 slat (nsec): min=1682, max=290116, avg=2510.36, stdev=3917.17 00:24:45.158 clat (usec): min=2904, max=21759, avg=11775.44, stdev=1675.76 00:24:45.158 lat (usec): min=2912, max=21761, avg=11777.95, stdev=1675.49 00:24:45.158 clat percentiles (usec): 00:24:45.158 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:24:45.158 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:24:45.158 | 70.00th=[11994], 80.00th=[12649], 90.00th=[14353], 95.00th=[15270], 00:24:45.158 | 99.00th=[16712], 99.50th=[17957], 99.90th=[20841], 99.95th=[21103], 00:24:45.158 | 99.99th=[21627] 00:24:45.158 bw ( KiB/s): min=20120, max=24256, per=100.00%, avg=22816.00, stdev=1837.71, samples=4 00:24:45.158 iops : min= 5030, max= 6062, avg=5703.50, stdev=458.91, samples=4 00:24:45.158 write: IOPS=5684, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2010msec); 0 zone resets 00:24:45.158 slat (nsec): min=1762, max=218559, avg=2649.92, stdev=2473.43 00:24:45.158 clat (usec): min=2188, max=20218, avg=10620.29, stdev=1520.73 00:24:45.158 lat (usec): min=2200, max=20220, avg=10622.94, stdev=1520.50 00:24:45.158 clat percentiles (usec): 00:24:45.158 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:24:45.158 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10552], 00:24:45.158 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12780], 95.00th=[13698], 00:24:45.158 | 99.00th=[15401], 99.50th=[16581], 99.90th=[18744], 99.95th=[19530], 00:24:45.158 | 99.99th=[20055] 00:24:45.159 bw ( KiB/s): min=19344, max=24504, per=99.87%, avg=22706.00, stdev=2294.32, samples=4 00:24:45.159 iops : min= 4836, max= 6126, avg=5676.50, stdev=573.58, samples=4 00:24:45.159 lat (msec) : 4=0.07%, 10=21.62%, 20=78.21%, 50=0.09% 00:24:45.159 cpu : usr=73.12%, sys=22.75%, ctx=453, majf=0, minf=12 00:24:45.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:45.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:45.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:45.159 issued rwts: total=11463,11425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:45.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:45.159 00:24:45.159 Run status group 0 (all jobs): 00:24:45.159 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:24:45.159 WRITE: bw=22.2MiB/s 
(23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.8MB), run=2010-2010msec 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.159 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.723 rmmod nvme_tcp 00:24:45.723 rmmod nvme_fabrics 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set -e 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 90645 ']' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 90645 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 90645 ']' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 90645 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90645 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:45.723 killing process with pid 90645 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90645' 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 90645 00:24:45.723 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 90645 00:24:45.723 [2024-05-15 13:42:58.786539] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.980 13:42:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.980 13:42:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:45.980 00:24:45.980 real 0m13.213s 00:24:45.980 user 0m54.465s 00:24:45.980 sys 0m4.219s 00:24:45.980 13:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:45.980 13:42:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.980 ************************************ 00:24:45.980 END TEST nvmf_fio_host 00:24:45.980 ************************************ 00:24:45.980 13:42:59 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:45.980 13:42:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:45.980 13:42:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:45.980 13:42:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:46.238 ************************************ 00:24:46.238 START TEST nvmf_failover 00:24:46.238 ************************************ 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:46.238 * Looking for test storage... 00:24:46.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.238 13:42:59 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.239 
13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:46.239 Cannot find device "nvmf_tgt_br" 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:46.239 Cannot find device "nvmf_tgt_br2" 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:46.239 Cannot find device "nvmf_tgt_br" 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:46.239 Cannot find device "nvmf_tgt_br2" 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:46.239 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
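The "Cannot find device" complaints above (and the "Cannot open network namespace" ones that follow) are expected rather than failures: nvmftestinit first tears down whatever bridge, veth, and namespace objects an earlier run may have left behind, and the bare "# true" entries logged right after each miss suggest every cleanup step is tolerated when its target does not exist. A rough reading of that cleanup phase, with the error-tolerant form inferred rather than taken from the script source:

# Hedged sketch of the teardown above; each step is assumed to be allowed to
# fail when the interface or namespace from a previous run is absent.
ip link set nvmf_tgt_br nomaster   || true
ip link set nvmf_tgt_br2 nomaster  || true
ip link set nvmf_tgt_br down       || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if        || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true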
00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:46.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:46.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:46.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:46.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:24:46.509 00:24:46.509 --- 10.0.0.2 ping statistics --- 00:24:46.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.509 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:46.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:46.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:24:46.509 00:24:46.509 --- 10.0.0.3 ping statistics --- 00:24:46.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.509 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:46.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:24:46.509 00:24:46.509 --- 10.0.0.1 ping statistics --- 00:24:46.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.509 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:46.509 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=91101 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 91101 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 91101 ']' 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
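The pings above confirm the virtual topology that nvmf_veth_init has just rebuilt: the initiator keeps 10.0.0.1 on the host-side veth, both target addresses (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and everything meets on the nvmf_br bridge, which is also why NVMF_APP gets prefixed with ip netns exec before nvmf_tgt is launched. Reassembled from the commands in the trace, the setup is approximately:

# Condensed sketch of nvmf_veth_init (commands reproduced from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # host side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, first address
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, second address
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host to first target address, as verified above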
00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:46.772 13:42:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:46.772 [2024-05-15 13:42:59.662774] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:24:46.772 [2024-05-15 13:42:59.662891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.772 [2024-05-15 13:42:59.796297] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:46.772 [2024-05-15 13:42:59.815916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:47.030 [2024-05-15 13:42:59.872426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.030 [2024-05-15 13:42:59.872493] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.030 [2024-05-15 13:42:59.872508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.030 [2024-05-15 13:42:59.872520] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.030 [2024-05-15 13:42:59.872531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.030 [2024-05-15 13:42:59.872746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.030 [2024-05-15 13:42:59.873722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.030 [2024-05-15 13:42:59.873726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.594 13:43:00 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:47.851 [2024-05-15 13:43:00.833802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.851 13:43:00 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:48.107 Malloc0 00:24:48.107 13:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.365 13:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:48.622 13:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.927 [2024-05-15 13:43:01.905985] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:48.927 [2024-05-15 13:43:01.906314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.927 13:43:01 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:49.183 [2024-05-15 13:43:02.194490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:49.183 13:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:49.441 [2024-05-15 13:43:02.422677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=91164 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 91164 /var/tmp/bdevperf.sock 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 91164 ']' 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
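At this point the target inside the namespace exports nqn.2016-06.io.spdk:cnode1, backed by the 64 MiB Malloc0 bdev, on three TCP listeners at 10.0.0.2 (ports 4420, 4421, 4422), and bdevperf has been started with -z, which, as used here together with bdevperf.py, makes it wait for configuration and a perform_tests call over its own RPC socket instead of running immediately. Condensed from the trace, the scaffolding looks like:

# Target side: transport, malloc bdev, one subsystem, three listeners
# (commands and sizes reproduced from the trace above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# Initiator side: bdevperf idles until driven through /var/tmp/bdevperf.sock
# (controller attach and perform_tests follow in the trace below).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &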
00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:49.441 13:43:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:50.816 13:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:50.816 13:43:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:50.816 13:43:03 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.816 NVMe0n1 00:24:50.816 13:43:03 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.073 00:24:51.073 13:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=91188 00:24:51.073 13:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:51.073 13:43:04 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:52.447 13:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:52.447 13:43:05 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:55.728 13:43:08 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.035 00:24:56.035 13:43:08 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.294 13:43:09 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:59.574 13:43:12 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:59.574 [2024-05-15 13:43:12.405674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:59.574 13:43:12 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:00.506 13:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:00.763 13:43:13 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 91188 00:25:07.387 0 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 91164 ']' 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91164' 00:25:07.387 killing process with pid 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 91164 00:25:07.387 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:07.387 [2024-05-15 13:43:02.490080] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:07.387 [2024-05-15 13:43:02.490193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91164 ] 00:25:07.387 [2024-05-15 13:43:02.610924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:07.387 [2024-05-15 13:43:02.622121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.387 [2024-05-15 13:43:02.676434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.387 Running I/O for 15 seconds... 00:25:07.387 [2024-05-15 13:43:05.456453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.387 [2024-05-15 13:43:05.456751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.456968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.456985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:70 nsid:1 lba:89280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:89304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.387 [2024-05-15 13:43:05.457335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.387 [2024-05-15 13:43:05.457352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:89360 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:89376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.457568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:07.388 [2024-05-15 13:43:05.457742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.457984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.457999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.458031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.458063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.388 [2024-05-15 13:43:05.458095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:89416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:89440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:89464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.388 [2024-05-15 13:43:05.458414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.388 [2024-05-15 13:43:05.458429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:89504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:89512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.458876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.458978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:89552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.458993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.459024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:89568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.459057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.459089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.459121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-05-15 13:43:05.459153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 
13:43:05.459412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.389 [2024-05-15 13:43:05.459587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.389 [2024-05-15 13:43:05.459603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.390 [2024-05-15 13:43:05.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.459978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.459994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:89616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:78 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:89640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-05-15 13:43:05.460177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4cfd0 is same with the state(5) to be set 00:25:07.390 [2024-05-15 13:43:05.460215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89656 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90120 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90128 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90136 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90144 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90152 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460582] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90160 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.390 [2024-05-15 13:43:05.460647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90168 len:8 PRP1 0x0 PRP2 0x0 00:25:07.390 [2024-05-15 13:43:05.460664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.390 [2024-05-15 13:43:05.460679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.390 [2024-05-15 13:43:05.460689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:90176 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.460739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90184 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.460788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90192 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.460836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90200 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.460885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90208 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.460952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.460963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90216 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.460977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.460992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.461003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.461020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90224 len:8 PRP1 0x0 PRP2 0x0 
00:25:07.391 [2024-05-15 13:43:05.461034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.391 [2024-05-15 13:43:05.461060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.391 [2024-05-15 13:43:05.461071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90232 len:8 PRP1 0x0 PRP2 0x0 00:25:07.391 [2024-05-15 13:43:05.461088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461144] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf4cfd0 was disconnected and freed. reset controller. 00:25:07.391 [2024-05-15 13:43:05.461163] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:07.391 [2024-05-15 13:43:05.461225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.391 [2024-05-15 13:43:05.461243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.391 [2024-05-15 13:43:05.461285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.391 [2024-05-15 13:43:05.461316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.391 [2024-05-15 13:43:05.461346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:05.461362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.391 [2024-05-15 13:43:05.464751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.391 [2024-05-15 13:43:05.464811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9db0 (9): Bad file descriptor 00:25:07.391 [2024-05-15 13:43:05.499119] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
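(Editor's note: the failover cycle logged above — abort of queued I/O, "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421", then "Resetting controller successful" — is driven by the rpc.py calls recorded earlier in this log. The outline below is a minimal sketch reconstructed from those recorded invocations (socket /var/tmp/bdevperf.sock, subsystem nqn.2016-06.io.spdk:cnode1, target 10.0.0.2, ports 4420-4422); it is illustrative only and is not the canonical test/nvmf/host/failover.sh script.)

    #!/usr/bin/env bash
    # Sketch of the failover sequence exercised by this run, assuming an
    # nvmf target on 10.0.0.2 and a bdevperf instance listening on the
    # RPC socket below (both visible earlier in this log).
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Attach the controller through the primary and secondary listeners.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN

    # Start I/O, then drop the 4420 listener so the host fails over to 4421.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3

    # Add a third path on 4422 and retire 4421; the host resets onto 4422.
    $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # Restore 4420 and remove 4422 to fail back to the original listener.
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422

Each listener removal produces a burst of "ABORTED - SQ DELETION" completions like those above while bdev_nvme re-queues the outstanding I/O against the surviving path, which is why the log alternates long abort runs with a single "Resetting controller successful" line per transition.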
00:25:07.391 [2024-05-15 13:43:09.129943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.391 [2024-05-15 13:43:09.130521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.391 [2024-05-15 13:43:09.130537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-05-15 13:43:09.130552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130706] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.130784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.130815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.130846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.130878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.130953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.130986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.392 [2024-05-15 13:43:09.131357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.131390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126944 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.131433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.131471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.392 [2024-05-15 13:43:09.131503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.392 [2024-05-15 13:43:09.131520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.131968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.131982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 
[2024-05-15 13:43:09.132079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.393 [2024-05-15 13:43:09.132448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.393 [2024-05-15 13:43:09.132641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.393 [2024-05-15 13:43:09.132657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.132968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.132985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.394 [2024-05-15 13:43:09.133478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 13:43:09.133716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.394 [2024-05-15 13:43:09.133731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.394 [2024-05-15 
13:43:09.133747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22c50 is same with the state(5) to be set 00:25:07.394 [2024-05-15 13:43:09.133766] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.394 [2024-05-15 13:43:09.133777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.133788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127312 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.133805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.133821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.133832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.133843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127832 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.133859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.133874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.133887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.133898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127840 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.133913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.133929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.133945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127848 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127856 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127864 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127872 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127880 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127888 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134340] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127320 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127328 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127336 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127344 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127352 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127360 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127368 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.395 [2024-05-15 13:43:09.134724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.395 [2024-05-15 13:43:09.134735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127376 len:8 PRP1 0x0 PRP2 0x0 00:25:07.395 [2024-05-15 13:43:09.134753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134814] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf22c50 was disconnected and freed. reset controller. 
00:25:07.395 [2024-05-15 13:43:09.134833] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:07.395 [2024-05-15 13:43:09.134895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.395 [2024-05-15 13:43:09.134914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.395 [2024-05-15 13:43:09.134947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.395 [2024-05-15 13:43:09.134980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.395 [2024-05-15 13:43:09.134994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:09.135010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.396 [2024-05-15 13:43:09.135024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:09.135038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.396 [2024-05-15 13:43:09.138290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.396 [2024-05-15 13:43:09.138340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9db0 (9): Bad file descriptor 00:25:07.396 [2024-05-15 13:43:09.169780] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
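The "ABORTED - SQ DELETION (00/08)" completions above are the queued I/Os being failed back while qpair 0xf22c50 is torn down for the failover from 10.0.0.2:4421 to 10.0.0.2:4422. As a minimal sketch, and not part of the test output, the fragment below shows how that "(00/08)" pair decodes against SPDK's public completion fields inside an I/O completion callback; the io_ctx structure and io_complete_cb name are hypothetical, only the SPDK types and constants are real.

/*
 * Illustrative sketch only: decode the "ABORTED - SQ DELETION (00/08)"
 * status seen in the log above. The per-I/O context and callback name
 * are hypothetical; the SPDK fields and constants are from spdk/nvme.h.
 */
#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Hypothetical per-I/O bookkeeping used by this sketch. */
struct io_ctx {
	bool requeue;
};

static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = cb_arg;

	/*
	 * The "(00/08)" printed by spdk_nvme_print_completion is the
	 * (status code type / status code) pair: SCT 0x00 is the generic
	 * command set, SC 0x08 is ABORTED - SQ DELETION.
	 */
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* The submission queue went away (e.g. during the failover
		 * above); mark the I/O so the caller can resubmit it on the
		 * new path rather than treating it as a data error. */
		ctx->requeue = true;
		return;
	}

	ctx->requeue = false;
}

The sketch only covers the status decoding; in the run above the bdev_nvme layer performs the actual requeue and controller reset itself, as the subsequent "Resetting controller successful" notice shows.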
00:25:07.396 [2024-05-15 13:43:13.677395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.396 [2024-05-15 13:43:13.677470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.677487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.396 [2024-05-15 13:43:13.677502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.677516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.396 [2024-05-15 13:43:13.677541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.677572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.396 [2024-05-15 13:43:13.677587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.677603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec9db0 is same with the state(5) to be set 00:25:07.396 [2024-05-15 13:43:13.678369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.678398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.678972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.678989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.679004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.679035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.396 [2024-05-15 13:43:13.679066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.679098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.679130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.679161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.679193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.396 [2024-05-15 13:43:13.679224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.396 [2024-05-15 13:43:13.679252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.679268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.679299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.679331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.397 [2024-05-15 13:43:13.679574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.397 [2024-05-15 13:43:13.679910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.679941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.679973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.679991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.397 [2024-05-15 13:43:13.680233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.397 [2024-05-15 13:43:13.680259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680544] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.680753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.680974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.680996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.398 [2024-05-15 13:43:13.681011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111552 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.398 [2024-05-15 13:43:13.681338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.398 [2024-05-15 13:43:13.681354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 
[2024-05-15 13:43:13.681544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:07.399 [2024-05-15 13:43:13.681773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681875] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.681969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.681986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.399 [2024-05-15 13:43:13.682001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22c50 is same with the state(5) to be set 00:25:07.399 [2024-05-15 13:43:13.682034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111120 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111696 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111704 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111712 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111720 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111728 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111736 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682426] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111744 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.399 [2024-05-15 13:43:13.682477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.399 [2024-05-15 13:43:13.682489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111752 len:8 PRP1 0x0 PRP2 0x0 00:25:07.399 [2024-05-15 13:43:13.682505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.399 [2024-05-15 13:43:13.682520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 
[2024-05-15 13:43:13.682531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111760 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111768 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111776 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111784 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111792 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111800 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682851] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111808 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:07.400 [2024-05-15 13:43:13.682903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:07.400 [2024-05-15 13:43:13.682914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111816 len:8 PRP1 0x0 PRP2 0x0 00:25:07.400 [2024-05-15 13:43:13.682930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.400 [2024-05-15 13:43:13.682988] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf22c50 was disconnected and freed. reset controller. 00:25:07.400 [2024-05-15 13:43:13.683007] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:07.400 [2024-05-15 13:43:13.683023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:07.400 [2024-05-15 13:43:13.686387] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:07.400 [2024-05-15 13:43:13.686434] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec9db0 (9): Bad file descriptor 00:25:07.400 [2024-05-15 13:43:13.720597] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
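The long run of ABORTED - SQ DELETION completions above is the I/O path being drained during the induced failover rather than a data error: when the TCP qpair to 10.0.0.2:4422 is torn down, every queued READ/WRITE is completed with an abort status, the qpair is disconnected and freed, and (as the bdev_nvme_failover_trid notice shows) the controller is reset against the alternate path at 10.0.0.2:4420. The alternate paths come from attaching the same controller name to each listener, a pattern the harness repeats further down in this trace. A minimal sketch of that registration step, using only the RPC script, socket path and addresses visible in this log (the loop is a consolidation for illustration, not the harness's literal code):

    SOCK=/var/tmp/bdevperf.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # The first attach creates controller NVMe0 (bdev NVMe0n1); repeating the call with
    # the same -b name and subsystem NQN but another port appears to add an alternate
    # transport ID that bdev_nvme can fail over to, as the notices above suggest.
    for port in 4420 4421 4422; do
        "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done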
00:25:07.400
00:25:07.400 Latency(us)
00:25:07.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:07.400 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:07.400 Verification LBA range: start 0x0 length 0x4000
00:25:07.400 NVMe0n1 : 15.01 9807.93 38.31 247.66 0.00 12700.75 530.53 15478.98
00:25:07.400 ===================================================================================================================
00:25:07.400 Total : 9807.93 38.31 247.66 0.00 12700.75 530.53 15478.98
00:25:07.400 Received shutdown signal, test time was about 15.000000 seconds
00:25:07.400
00:25:07.400 Latency(us)
00:25:07.400 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:07.400 ===================================================================================================================
00:25:07.400 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=91360
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 91360 /var/tmp/bdevperf.sock
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 91360 ']'
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:25:07.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
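The bdevperf flags above map directly onto the job line printed in the results: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w verify the workload and -t 1 the runtime in seconds, while -z starts the application idle behind the -r RPC socket so controllers can be attached before the run is kicked off (done further down with bdevperf.py ... perform_tests). A rough stand-alone sketch of this launch-and-wait step, reusing the paths from the trace; the polling loop is only a stand-in for the harness's waitforlisten helper, and -f is carried over from the command unchanged:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    # launch idle (-z) and configure it later over the RPC socket (-r)
    "$BDEVPERF" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # wait until the UNIX socket answers RPCs before attaching controllers
    until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done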
00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:25:07.400 13:43:19 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:07.400 [2024-05-15 13:43:20.081943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:07.400 13:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:07.400 [2024-05-15 13:43:20.358221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:07.400 13:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.659 NVMe0n1 00:25:07.659 13:43:20 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.224 00:25:08.224 13:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.224 00:25:08.482 13:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:08.482 13:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:08.739 13:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.997 13:43:21 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:12.279 13:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.279 13:43:24 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:12.279 13:43:25 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=91429 00:25:12.279 13:43:25 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 91429 00:25:12.279 13:43:25 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:13.653 0 00:25:13.653 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:13.653 [2024-05-15 13:43:19.528552] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:25:13.653 [2024-05-15 13:43:19.528671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91360 ] 00:25:13.653 [2024-05-15 13:43:19.656952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:13.653 [2024-05-15 13:43:19.675356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.653 [2024-05-15 13:43:19.726593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.653 [2024-05-15 13:43:21.967460] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:13.653 [2024-05-15 13:43:21.967587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.653 [2024-05-15 13:43:21.967610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.653 [2024-05-15 13:43:21.967629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.653 [2024-05-15 13:43:21.967644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.653 [2024-05-15 13:43:21.967660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.653 [2024-05-15 13:43:21.967675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.653 [2024-05-15 13:43:21.967691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:13.653 [2024-05-15 13:43:21.967706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:13.653 [2024-05-15 13:43:21.967720] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:13.653 [2024-05-15 13:43:21.967769] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:13.653 [2024-05-15 13:43:21.967797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ebdb0 (9): Bad file descriptor 00:25:13.653 [2024-05-15 13:43:21.976908] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:13.653 Running I/O for 1 seconds... 
00:25:13.653 00:25:13.653 Latency(us) 00:25:13.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:13.653 Verification LBA range: start 0x0 length 0x4000 00:25:13.653 NVMe0n1 : 1.01 8513.29 33.26 0.00 0.00 14945.28 1622.80 16727.28 00:25:13.653 =================================================================================================================== 00:25:13.653 Total : 8513.29 33.26 0.00 0.00 14945.28 1622.80 16727.28 00:25:13.653 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.653 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:13.653 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:13.912 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:13.912 13:43:26 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:14.170 13:43:27 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:14.737 13:43:27 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 91360 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 91360 ']' 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 91360 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91360 00:25:18.019 killing process with pid 91360 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91360' 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 91360 00:25:18.019 13:43:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 91360 00:25:18.019 13:43:31 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:18.019 13:43:31 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:18.276 13:43:31 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.276 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.276 rmmod nvme_tcp 00:25:18.534 rmmod nvme_fabrics 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 91101 ']' 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 91101 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 91101 ']' 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 91101 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91101 00:25:18.534 killing process with pid 91101 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91101' 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 91101 00:25:18.534 [2024-05-15 13:43:31.431801] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:18.534 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 91101 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:18.792 00:25:18.792 real 0m32.602s 00:25:18.792 user 2m5.544s 00:25:18.792 sys 0m6.539s 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:18.792 13:43:31 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.792 ************************************ 00:25:18.793 END TEST nvmf_failover 00:25:18.793 ************************************ 00:25:18.793 13:43:31 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:18.793 13:43:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:18.793 13:43:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:18.793 13:43:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:18.793 ************************************ 00:25:18.793 START TEST nvmf_host_discovery 00:25:18.793 ************************************ 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:18.793 * Looking for test storage... 00:25:18.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:18.793 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:19.052 Cannot find device "nvmf_tgt_br" 00:25:19.052 
13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.052 Cannot find device "nvmf_tgt_br2" 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:19.052 Cannot find device "nvmf_tgt_br" 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:19.052 Cannot find device "nvmf_tgt_br2" 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:25:19.052 13:43:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.052 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:19.052 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:19.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:25:19.310 00:25:19.310 --- 10.0.0.2 ping statistics --- 00:25:19.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.310 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:19.310 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.310 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:25:19.310 00:25:19.310 --- 10.0.0.3 ping statistics --- 00:25:19.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.310 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:25:19.310 00:25:19.310 --- 10.0.0.1 ping statistics --- 00:25:19.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.310 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=91693 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 91693 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 91693 ']' 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:19.310 13:43:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:19.568 [2024-05-15 13:43:32.425232] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:19.568 [2024-05-15 13:43:32.425568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.568 [2024-05-15 13:43:32.554078] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
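At this point nvmf_veth_init has finished wiring up the test network: 10.0.0.1/24 sits on nvmf_init_if in the root namespace as the initiator address, 10.0.0.2/24 and 10.0.0.3/24 sit on nvmf_tgt_if and nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, the peer ends of the three veth pairs are enslaved to the nvmf_br bridge, iptables accepts TCP 4420 on the initiator interface, and the three pings confirm connectivity before NVMF_APP gets wrapped in 'ip netns exec nvmf_tgt_ns_spdk'. Condensed into a standalone sketch (interface names and addresses exactly as traced, error handling omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge all peer ends
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
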
00:25:19.568 [2024-05-15 13:43:32.576795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.568 [2024-05-15 13:43:32.636287] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.568 [2024-05-15 13:43:32.636566] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.568 [2024-05-15 13:43:32.636805] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.568 [2024-05-15 13:43:32.636956] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.568 [2024-05-15 13:43:32.637020] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.568 [2024-05-15 13:43:32.637149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 [2024-05-15 13:43:33.484753] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 [2024-05-15 13:43:33.492685] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:20.507 [2024-05-15 13:43:33.493011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 null0 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 null1 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91734 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91734 /tmp/host.sock 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 91734 ']' 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:20.507 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:20.507 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:20.507 [2024-05-15 13:43:33.563354] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:20.507 [2024-05-15 13:43:33.563431] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91734 ] 00:25:20.768 [2024-05-15 13:43:33.682519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
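The nvmf_tgt started inside the namespace a moment ago (-i 0 -e 0xFFFF -m 0x2) is being provisioned through its default RPC socket, while a second nvmf_tgt (-m 0x1 -r /tmp/host.sock) has just been launched to act as the discovery-side host. rpc_cmd in these scripts is SPDK's wrapper around scripts/rpc.py, so the RPC sequence the trace walks through next is roughly equivalent to the following standalone sketch (rpc.py path taken from this job's checkout; transport and discovery arguments copied verbatim from the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: default /var/tmp/spdk.sock of the namespaced nvmf_tgt
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512    # two 512-byte-block null bdevs used as namespaces later
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine

# host side: talk to the second nvmf_tgt on /tmp/host.sock and start the discovery client
$RPC -s /tmp/host.sock log_set_flag bdev_nvme
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
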
00:25:20.768 [2024-05-15 13:43:33.702763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.768 [2024-05-15 13:43:33.760222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.768 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:21.027 13:43:33 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.027 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 [2024-05-15 13:43:34.213030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:21.286 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:21.287 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.545 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 
'' == \n\v\m\e\0 ]] 00:25:21.545 13:43:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:21.803 [2024-05-15 13:43:34.887693] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:21.803 [2024-05-15 13:43:34.887733] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:21.803 [2024-05-15 13:43:34.887751] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:21.803 [2024-05-15 13:43:34.893721] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:22.062 [2024-05-15 13:43:34.949897] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.062 [2024-05-15 13:43:34.949939] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.320 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.579 
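Every expectation in this test is checked through waitforcondition, which re-evaluates its condition up to ten times with a one-second pause, and the two getters it leans on are fully visible in the trace: get_subsystem_names lists attached controllers via bdev_nvme_get_controllers on the host socket, and get_bdev_list lists the resulting bdevs via bdev_get_bdevs, both piped through jq -r '.[].name' | sort | xargs. A minimal self-contained sketch of that polling pattern (rpc.py path as used by this job; the ten-tries/one-second cadence mirrors waitforcondition):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_subsystem_names() {    # e.g. "nvme0" once discovery has attached the controller
    "$RPC" -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {          # e.g. "nvme0n1 nvme0n2" once both namespaces are exposed
    "$RPC" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# poll, as waitforcondition does, until the discovered subsystem shows up
for ((i = 0; i < 10; i++)); do
    [[ "$(get_subsystem_names)" == "nvme0" ]] && break
    sleep 1
done
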
13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.579 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:22.580 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.840 [2024-05-15 13:43:35.729410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.840 [2024-05-15 13:43:35.730467] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:22.840 [2024-05-15 13:43:35.730628] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.840 [2024-05-15 13:43:35.736456] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:22.840 [2024-05-15 13:43:35.800751] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:22.840 [2024-05-15 13:43:35.800912] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:22.840 [2024-05-15 13:43:35.801002] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd 
-s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:22.840 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 [2024-05-15 13:43:35.954782] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:23.099 [2024-05-15 13:43:35.954967] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:23.099 [2024-05-15 13:43:35.957109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.099 [2024-05-15 13:43:35.957287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.099 [2024-05-15 13:43:35.957464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.099 [2024-05-15 13:43:35.957617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.099 [2024-05-15 13:43:35.957716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.099 [2024-05-15 13:43:35.957773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.099 [2024-05-15 13:43:35.957868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:23.099 [2024-05-15 13:43:35.957922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.099 [2024-05-15 13:43:35.958006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790670 is same with the state(5) to be set 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.099 13:43:35 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:23.099 [2024-05-15 13:43:35.961151] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:23.099 [2024-05-15 13:43:35.961177] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:23.099 [2024-05-15 13:43:35.961231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790670 (9): Bad file descriptor 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 13:43:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.099 13:43:36 
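Dropping the 4420 listener (nvmf_subsystem_remove_listener just above) makes the target raise an AER, the host fetches a fresh discovery log page, and the discovery poller prunes the 4420 path while keeping 4421; the ABORTED - SQ DELETION completions are the outstanding async-event requests of that path being flushed as its queue pair is torn down. The condition being polled next relies on get_subsystem_paths, whose expansion is visible in the trace; a sketch of that check (rpc.py path as used by this job, controller name from the trace):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

get_subsystem_paths() {    # trsvcid of every active path to the named controller
    "$RPC" -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_subsystem_paths nvme0  # was "4420 4421"; expected to settle on "4421" after the removal
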
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:23.099 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.100 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:23.359 13:43:36 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.359 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.360 13:43:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.399 [2024-05-15 13:43:37.368523] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:24.399 [2024-05-15 13:43:37.368741] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:24.399 [2024-05-15 13:43:37.368800] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:24.399 [2024-05-15 13:43:37.374556] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:24.399 [2024-05-15 13:43:37.434234] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:24.399 [2024-05-15 13:43:37.434564] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.400 13:43:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.400 request: 00:25:24.400 { 00:25:24.400 "name": "nvme", 00:25:24.400 "trtype": "tcp", 00:25:24.400 "traddr": "10.0.0.2", 00:25:24.400 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:24.400 "adrfam": "ipv4", 00:25:24.400 "trsvcid": "8009", 00:25:24.400 "wait_for_attach": true, 00:25:24.400 "method": "bdev_nvme_start_discovery", 00:25:24.400 "req_id": 1 00:25:24.400 } 00:25:24.400 Got JSON-RPC error response 00:25:24.400 response: 00:25:24.400 { 00:25:24.400 "code": -17, 00:25:24.400 "message": "File exists" 00:25:24.400 } 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:24.400 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.658 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.659 request: 00:25:24.659 { 00:25:24.659 "name": "nvme_second", 00:25:24.659 "trtype": "tcp", 00:25:24.659 "traddr": "10.0.0.2", 00:25:24.659 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:24.659 "adrfam": "ipv4", 00:25:24.659 "trsvcid": "8009", 00:25:24.659 "wait_for_attach": true, 00:25:24.659 "method": "bdev_nvme_start_discovery", 00:25:24.659 "req_id": 1 00:25:24.659 } 00:25:24.659 Got JSON-RPC error response 00:25:24.659 response: 00:25:24.659 { 00:25:24.659 "code": -17, 00:25:24.659 "message": "File exists" 00:25:24.659 } 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- 
# xargs 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.659 13:43:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.032 [2024-05-15 13:43:38.732000] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.032 [2024-05-15 13:43:38.732318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.032 [2024-05-15 13:43:38.732403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.032 [2024-05-15 13:43:38.732503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17adb80 with addr=10.0.0.2, port=8010 00:25:26.032 [2024-05-15 13:43:38.732576] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.032 [2024-05-15 13:43:38.732656] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.032 [2024-05-15 13:43:38.732694] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:26.967 [2024-05-15 13:43:39.731985] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.967 [2024-05-15 13:43:39.732303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.967 [2024-05-15 13:43:39.732382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:26.967 [2024-05-15 13:43:39.732490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17adb80 with addr=10.0.0.2, port=8010 00:25:26.967 [2024-05-15 13:43:39.732652] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:26.967 [2024-05-15 13:43:39.732735] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:26.967 [2024-05-15 13:43:39.732770] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:27.902 [2024-05-15 13:43:40.731859] bdev_nvme.c:7010:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:27.902 request: 00:25:27.902 { 00:25:27.902 "name": "nvme_second", 00:25:27.902 "trtype": "tcp", 00:25:27.902 "traddr": "10.0.0.2", 00:25:27.902 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:27.902 "adrfam": "ipv4", 00:25:27.902 "trsvcid": "8010", 00:25:27.902 "attach_timeout_ms": 3000, 00:25:27.902 "method": "bdev_nvme_start_discovery", 00:25:27.902 "req_id": 1 00:25:27.902 } 00:25:27.902 Got JSON-RPC error response 00:25:27.902 response: 00:25:27.902 { 00:25:27.902 "code": -110, 00:25:27.902 "message": "Connection timed out" 00:25:27.902 } 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91734 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.902 rmmod nvme_tcp 00:25:27.902 rmmod nvme_fabrics 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 91693 ']' 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 91693 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 91693 ']' 00:25:27.902 13:43:40 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 91693 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91693 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91693' 00:25:27.902 killing process with pid 91693 00:25:27.902 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 91693 00:25:27.902 [2024-05-15 13:43:40.932530] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:43:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 91693 00:25:27.902 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:28.161 00:25:28.161 real 0m9.425s 00:25:28.161 user 0m17.078s 00:25:28.161 sys 0m2.450s 00:25:28.161 ************************************ 00:25:28.161 END TEST nvmf_host_discovery 00:25:28.161 ************************************ 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.161 13:43:41 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:28.161 13:43:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:28.161 13:43:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:28.161 13:43:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.161 ************************************ 00:25:28.161 START TEST nvmf_host_multipath_status 00:25:28.161 ************************************ 00:25:28.161 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:28.419 * Looking for test storage... 
00:25:28.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.419 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:28.420 Cannot find device "nvmf_tgt_br" 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:25:28.420 Cannot find device "nvmf_tgt_br2" 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:28.420 Cannot find device "nvmf_tgt_br" 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:28.420 Cannot find device "nvmf_tgt_br2" 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:28.420 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:28.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:28.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:28.683 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:28.684 13:43:41 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:28.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:25:28.684 00:25:28.684 --- 10.0.0.2 ping statistics --- 00:25:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.684 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:28.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:28.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:25:28.684 00:25:28.684 --- 10.0.0.3 ping statistics --- 00:25:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.684 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:28.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:25:28.684 00:25:28.684 --- 10.0.0.1 ping statistics --- 00:25:28.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.684 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.684 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=92171 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 92171 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 92171 ']' 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:28.942 13:43:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:28.942 [2024-05-15 13:43:41.860109] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:28.942 [2024-05-15 13:43:41.860215] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.942 [2024-05-15 13:43:41.988828] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
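The nvmf_veth_init sequence traced above reduces to a small, reproducible topology: one veth pair for the initiator and two for the target, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/24 on the host side and 10.0.0.2/24 plus 10.0.0.3/24 inside the namespace, with the host-side peers enslaved to the nvmf_br bridge. A minimal standalone sketch follows (names and addresses copied from the log; run as root; this is a simplified rendering of the nvmf/common.sh helpers, not the script itself):

  #!/usr/bin/env bash
  # Host <-> bridge <-> target-namespace plumbing used by the NVMe/TCP tests.
  set -euo pipefail
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # Target ends live inside the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers and let NVMe/TCP traffic through.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # Same sanity pings as in the log above.
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1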
00:25:28.942 [2024-05-15 13:43:42.007417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:29.199 [2024-05-15 13:43:42.064358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.199 [2024-05-15 13:43:42.064437] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.199 [2024-05-15 13:43:42.064453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.199 [2024-05-15 13:43:42.064467] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.199 [2024-05-15 13:43:42.064478] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.199 [2024-05-15 13:43:42.064623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.199 [2024-05-15 13:43:42.064632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=92171 00:25:29.766 13:43:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:30.024 [2024-05-15 13:43:43.083658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.024 13:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:30.591 Malloc0 00:25:30.591 13:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:30.849 13:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.108 13:43:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.108 [2024-05-15 13:43:44.154418] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:31.108 [2024-05-15 13:43:44.154693] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.108 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:31.366 [2024-05-15 13:43:44.438848] tcp.c: 
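Before the first ANA check below, the test wires up the target and the bdevperf host over JSON-RPC. Condensed into the handful of rpc.py calls visible in this log (same NQN, bdev, ports, and jq filter), the flow looks roughly like the sketch below; it is an illustrative summary, not a replacement for multipath_status.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: TCP transport, one malloc namespace (64 MiB, 512-byte blocks),
  # ANA reporting enabled (-r), and two listeners so each port can be driven to a
  # different ANA state.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side (bdevperf RPC socket): two controllers to the same subsystem, the
  # second attached with multipath enabled.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # set_ANA_state / port_status in miniature: flip a listener's state, then read a
  # path attribute back out of bdev_nvme_get_io_paths.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'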
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=92221 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 92221 /var/tmp/bdevperf.sock 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 92221 ']' 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:31.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:31.366 13:43:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:32.740 13:43:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:32.740 13:43:45 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:32.740 13:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:32.741 13:43:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:32.999 Nvme0n1 00:25:32.999 13:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:33.570 Nvme0n1 00:25:33.570 13:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:33.570 13:43:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:35.532 13:43:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:35.532 13:43:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:35.790 13:43:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:36.048 13:43:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:36.979 13:43:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:36.979 13:43:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:36.979 13:43:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.979 13:43:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.236 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.236 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.236 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.236 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.494 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.494 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.494 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.494 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.752 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.752 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.752 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.752 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.011 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.011 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.011 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.011 13:43:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.269 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.269 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.269 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:38.269 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.527 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.527 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:38.527 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.785 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:39.041 13:43:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:39.972 13:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:39.972 13:43:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:39.972 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.972 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.229 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.229 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:40.230 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.230 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.485 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.485 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.485 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.485 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.741 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.741 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.741 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.741 13:43:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.998 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.998 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.256 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.513 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.513 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:41.513 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.770 13:43:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:42.038 13:43:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:42.973 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:42.973 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.973 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.973 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:43.232 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.232 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:43.232 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.232 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.490 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.490 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.490 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.490 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.748 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.748 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.748 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.748 13:43:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:44.314 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.314 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:44.314 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.314 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:44.648 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.648 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:44.648 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.648 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.906 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.906 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:44.906 13:43:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.163 13:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:45.421 13:43:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:46.356 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:46.356 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.356 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.356 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:25:46.614 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:46.614 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:46.614 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.614 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:46.872 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.872 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:46.872 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:46.872 13:43:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.130 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.130 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.130 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.130 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.395 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.395 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.395 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.395 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.653 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.653 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:47.653 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.653 13:44:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.220 13:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.220 13:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:48.220 13:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -n inaccessible 00:25:48.478 13:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:48.736 13:44:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:49.670 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:49.670 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:49.670 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.670 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.927 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.927 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.927 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.927 13:44:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:50.183 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.183 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:50.183 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.183 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:50.747 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.747 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:50.747 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.747 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.005 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:51.005 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:51.005 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.005 13:44:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:51.260 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:25:51.260 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:51.260 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:51.260 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:51.520 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:51.520 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:51.520 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:51.827 13:44:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.120 13:44:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:53.064 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:53.064 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:53.064 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.064 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.337 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.337 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:53.337 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.337 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.595 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.595 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.595 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.595 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:54.199 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.199 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:54.199 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.199 13:44:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.199 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.199 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:54.199 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.199 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.478 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:54.478 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.478 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.478 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.746 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.746 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:55.004 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:55.004 13:44:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:55.262 13:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.520 13:44:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:56.452 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:56.452 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:56.452 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.452 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.735 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.735 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.735 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:25:56.735 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.998 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.998 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.998 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.998 13:44:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.257 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.257 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.257 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.257 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.516 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.516 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.516 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.516 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.773 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.773 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.773 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.773 13:44:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.030 13:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.030 13:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:58.030 13:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:58.288 13:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.548 13:44:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.922 13:44:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.180 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.180 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.180 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.180 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.439 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.439 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.439 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.439 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.697 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.697 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.697 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.697 13:44:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.955 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.955 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.955 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.955 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:26:01.214 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.214 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:01.214 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.484 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:01.742 13:44:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:03.118 13:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:03.118 13:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:03.118 13:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.118 13:44:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.118 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.118 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:03.118 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.118 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.375 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.375 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.375 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.375 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.633 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.633 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.633 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.633 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.891 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.891 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # 
port_status 4420 accessible true 00:26:03.891 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.891 13:44:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.149 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.149 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.149 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.149 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.407 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.407 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:04.407 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.667 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:04.924 13:44:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:05.880 13:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:05.880 13:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.880 13:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.880 13:44:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.446 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.446 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.446 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.446 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.705 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.705 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.705 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.705 13:44:19 
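The long run of checks above is the same two helpers applied across every ANA state combination: set_ANA_state pushes a new state to each target listener, and port_status reads bdevperf's view back through bdev_nvme_get_io_paths plus a jq filter. A minimal reconstruction from the logged commands follows; helper names and argument order mirror the trace, but treat it as an illustration rather than the verbatim host/multipath_status.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Set the ANA state of the 4420 and 4421 listeners on the target side,
    # e.g. "set_ANA_state non_optimized inaccessible".
    set_ANA_state() {
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Ask bdevperf for its I/O paths and compare one field (current, connected
    # or accessible) of the path with the given trsvcid against the expected
    # value, e.g. "port_status 4421 accessible false".
    port_status() {
        local port=$1 field=$2 expected=$3
        local actual
        actual=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$actual" == "$expected" ]]
    }

As the trace shows, each check_status call is just six port_status assertions in a row (current, connected and accessible for ports 4420 and 4421), run after a one-second sleep so the ANA change has time to propagate to the host.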
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.963 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.963 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.963 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.963 13:44:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.221 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.221 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.221 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.221 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.480 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.480 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:07.480 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.480 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 92221 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 92221 ']' 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 92221 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92221 00:26:07.738 killing process with pid 92221 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92221' 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 92221 00:26:07.738 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 92221 00:26:07.738 Connection closed with partial response: 00:26:07.738 00:26:07.738 00:26:08.004 13:44:20 
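The teardown at the end of the test is the generic killprocess helper from common/autotest_common.sh; the trace above shows it confirming pid 92221 is still alive, checking the process name (reactor_2, i.e. not a sudo wrapper), then killing and waiting on it. Below is a condensed sketch of only the branch taken in this run, reconstructed from the trace; the real helper has additional branches (for sudo-wrapped processes and non-Linux hosts) that are omitted here.

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                # process must still be running
        [[ "$(uname)" == Linux ]] || return 1     # only the Linux branch is sketched
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_2 in this run
        if [[ "$name" != sudo ]]; then            # sudo-wrapped case not shown
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"
    }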
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 92221 00:26:08.005 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:08.005 [2024-05-15 13:43:44.516074] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:08.005 [2024-05-15 13:43:44.516211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92221 ] 00:26:08.005 [2024-05-15 13:43:44.644607] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:08.005 [2024-05-15 13:43:44.659546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.005 [2024-05-15 13:43:44.717044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.005 Running I/O for 90 seconds... 00:26:08.005 [2024-05-15 13:44:01.370232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.370614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.370972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.370993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.005 [2024-05-15 13:44:01.371364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.005 [2024-05-15 13:44:01.371548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.005 [2024-05-15 13:44:01.371704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.005 [2024-05-15 13:44:01.371725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.371740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.371777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.371814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.371851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.371901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.371937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.371974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.371996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:26:08.006 [2024-05-15 13:44:01.372516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.372809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.372964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.372986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.373001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.373039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.373078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.006 [2024-05-15 13:44:01.373121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.373160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.373198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.373236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.006 [2024-05-15 13:44:01.373283] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.006 [2024-05-15 13:44:01.373306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.373784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.373826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.373865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.373903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.373942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.373965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.373982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.007 [2024-05-15 13:44:01.374739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:26:08.007 [2024-05-15 13:44:01.374876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.007 [2024-05-15 13:44:01.374892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.007 [2024-05-15 13:44:01.374914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.374929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.374952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.374968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.374990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.375005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.375043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.375329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.376586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.376848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.376863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:08.008 [2024-05-15 13:44:01.377637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.008 [2024-05-15 13:44:01.377906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.377944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.377966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.377982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.378004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.378020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.378043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.008 [2024-05-15 13:44:01.378058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.008 [2024-05-15 13:44:01.378081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:26:08.009 [2024-05-15 13:44:01.378812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.009 [2024-05-15 13:44:01.378868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.378963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.378978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.009 [2024-05-15 13:44:01.379276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.009 [2024-05-15 13:44:01.379291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 13:44:01.379328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 13:44:01.379366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 13:44:01.379402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 13:44:01.379439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.010 [2024-05-15 13:44:01.379476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.010 [2024-05-15 13:44:01.379512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.010 [2024-05-15 13:44:01.379534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.010 [2024-05-15 13:44:01.379555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:08.010-00:26:08.016 [2024-05-15 13:44:01.379577 - 2024-05-15 13:44:01.398073] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated entries: READ sqid:1 nsid:1 lba:80400-81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:81040-81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, cid and sqhd varying per command]
00:26:08.016 [2024-05-15 13:44:01.398073] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:26:08.016 [2024-05-15 13:44:01.398516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.398765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.398960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.398977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.399003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.399020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.399045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.399062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.399087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.016 [2024-05-15 13:44:01.399104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.016 [2024-05-15 13:44:01.399130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.016 [2024-05-15 13:44:01.399147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.399466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:08.017 [2024-05-15 13:44:01.399814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.399967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.399984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.017 [2024-05-15 13:44:01.400516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400669] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:08.017 [2024-05-15 13:44:01.400846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.017 [2024-05-15 13:44:01.400863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.400888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.400906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.400931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.400948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.400973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.400991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 
dnr:0 00:26:08.018 [2024-05-15 13:44:01.401101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.401567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.401969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.018 [2024-05-15 13:44:01.402277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.402319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.402362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.018 [2024-05-15 13:44:01.402404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.402453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.402495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.018 [2024-05-15 13:44:01.402537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:08.018 [2024-05-15 13:44:01.402562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.402580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.402622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 
nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:01.402968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.402993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.403010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.403036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.403053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.403078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.403095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.403120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.403139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:01.403644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:01.403670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.862484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.862523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.862871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:52344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.862909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.862931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:52376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.862946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:08.019 [2024-05-15 13:44:17.862969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.862985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:52440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.863023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.019 [2024-05-15 13:44:17.863061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:08.019 [2024-05-15 13:44:17.863296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.019 [2024-05-15 13:44:17.863312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:52472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.863920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.863981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.863997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:52832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:52864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.020 [2024-05-15 13:44:17.864155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:52928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:52216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:52248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:52280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:52672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.020 [2024-05-15 13:44:17.864723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:08.020 [2024-05-15 13:44:17.864763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.020 [2024-05-15 13:44:17.864779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.864820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:52976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.864838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.864861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.864877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.864899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:53008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.864915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.864938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:53024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.864954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.864977] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:53040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.864993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:53072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:53088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:52744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:53104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:52808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
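The burst of NOTICE lines above is the host printing every outstanding command together with its completion while the active path reports its ANA state as inaccessible: each completion carries status "(03/02)", i.e. status code type 0x3 (Path Related Status) and status code 0x02 (Asymmetric Access Inaccessible), which is why the I/O is requeued on another path rather than failed (the run summary further down reports Fail/s 0.00). As a reading aid only (not part of the test scripts), a small helper that decodes the "(sct/sc)" pair printed by spdk_nvme_print_completion could look like this:

  # Illustrative helper: decode the "(sct/sc)" pair, e.g. "(03/02)" above.
  decode_nvme_status() {
      local sct=$((16#$1)) sc=$((16#$2))
      local type
      case "$sct" in
          0) type="generic command status" ;;
          1) type="command specific status" ;;
          2) type="media and data integrity error" ;;
          3) type="path related status" ;;   # sc 0x02 here = Asymmetric Access Inaccessible
          7) type="vendor specific" ;;
          *) type="reserved" ;;
      esac
      printf 'sct=0x%x (%s), sc=0x%02x\n' "$sct" "$type" "$sc"
  }
  decode_nvme_status 03 02   # -> sct=0x3 (path related status), sc=0x02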
00:26:08.021 [2024-05-15 13:44:17.865384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:53128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:53144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:52432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:53176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.865748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:52920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.865826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.021 [2024-05-15 13:44:17.865845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.867429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.867468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.867498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:53216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.867514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.867537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:53232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.867552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:08.021 [2024-05-15 13:44:17.867574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.021 [2024-05-15 13:44:17.867590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:08.021 Received shutdown signal, test time was about 34.243519 seconds 00:26:08.021 00:26:08.021 Latency(us) 00:26:08.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.021 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:08.021 Verification LBA range: start 0x0 length 0x4000 00:26:08.021 Nvme0n1 : 34.24 9599.55 37.50 0.00 0.00 13305.01 116.05 4058488.44 00:26:08.021 =================================================================================================================== 00:26:08.021 Total : 9599.55 37.50 0.00 0.00 13305.01 116.05 4058488.44 00:26:08.021 13:44:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 
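The per-device summary above is easy to sanity-check: with the 4096-byte I/O size shown in the job line, 9599.55 IOPS corresponds to roughly 37.5 MiB/s, matching the reported MiB/s column, and both Fail/s and TO/s stay at 0.00 despite the path flapping. A quick cross-check (illustrative only, not part of the harness):

  # Rough cross-check of the summary line: IOPS * IO size -> MiB/s.
  iops=9599.55
  io_size=4096
  awk -v iops="$iops" -v sz="$io_size" \
      'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
  # prints 37.50 MiB/s, matching the "MiB/s" column for Nvme0n1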
00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:08.279 rmmod nvme_tcp 00:26:08.279 rmmod nvme_fabrics 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 92171 ']' 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 92171 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 92171 ']' 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 92171 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.279 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92171 00:26:08.280 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:08.280 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:08.280 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92171' 00:26:08.280 killing process with pid 92171 00:26:08.280 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 92171 00:26:08.280 [2024-05-15 13:44:21.297858] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 92171 00:26:08.280 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:08.535 ************************************ 00:26:08.535 END TEST nvmf_host_multipath_status 00:26:08.535 ************************************ 00:26:08.535 00:26:08.535 real 0m40.309s 00:26:08.535 user 2m7.657s 00:26:08.535 sys 0m14.180s 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:08.535 13:44:21 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@10 -- # set +x 00:26:08.535 13:44:21 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:08.535 13:44:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:08.535 13:44:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:08.535 13:44:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:08.535 ************************************ 00:26:08.535 START TEST nvmf_discovery_remove_ifc 00:26:08.535 ************************************ 00:26:08.535 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:08.840 * Looking for test storage... 00:26:08.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.840 13:44:21 
nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma 
']' 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:08.840 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:08.841 Cannot find device "nvmf_tgt_br" 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:08.841 Cannot find device "nvmf_tgt_br2" 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:08.841 Cannot find device "nvmf_tgt_br" 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:08.841 Cannot find device "nvmf_tgt_br2" 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:08.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:08.841 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:08.841 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:09.098 13:44:21 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:09.098 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:09.098 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:09.098 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:09.098 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:09.098 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:09.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:09.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:09.099 00:26:09.099 --- 10.0.0.2 ping statistics --- 00:26:09.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.099 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:09.099 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:09.099 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:26:09.099 00:26:09.099 --- 10.0.0.3 ping statistics --- 00:26:09.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.099 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:09.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:09.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:26:09.099 00:26:09.099 --- 10.0.0.1 ping statistics --- 00:26:09.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:09.099 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=93006 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 93006 00:26:09.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 93006 ']' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:09.099 13:44:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.099 [2024-05-15 13:44:22.191620] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:09.099 [2024-05-15 13:44:22.191941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:09.357 [2024-05-15 13:44:22.315009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
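The nvmf_veth_init sequence above builds the test topology from scratch: a nvmf_tgt_ns_spdk network namespace holding nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator-side nvmf_init_if (10.0.0.1) left in the root namespace, and an nvmf_br bridge joining the veth peers, verified by the three pings before the target is launched inside the namespace. A condensed sketch of that plumbing, using the same interface names the log shows (iptables ACCEPT rules and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target ns -> root ns

With that in place, nvmf_tgt is started with "ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0x2", so the 10.0.0.2 listeners created next are reachable from the host side across the bridge.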
00:26:09.357 [2024-05-15 13:44:22.331377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.357 [2024-05-15 13:44:22.405486] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:09.357 [2024-05-15 13:44:22.405832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:09.357 [2024-05-15 13:44:22.406006] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:09.357 [2024-05-15 13:44:22.406154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:09.357 [2024-05-15 13:44:22.406332] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:09.357 [2024-05-15 13:44:22.406421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.289 [2024-05-15 13:44:23.212393] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.289 [2024-05-15 13:44:23.220317] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:10.289 [2024-05-15 13:44:23.220785] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:10.289 null0 00:26:10.289 [2024-05-15 13:44:23.252519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.289 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=93038 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 93038 /tmp/host.sock 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 93038 ']' 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
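Judging by the notices above (TCP transport created, a null0 bdev, listeners on 10.0.0.2 ports 8009 and 4420 for nqn.2016-06.io.spdk:cnode0), the rpc_cmd block at host/discovery_remove_ifc.sh@43 configures the target roughly as follows. This is a hedged reconstruction using standard rpc.py commands, not the script verbatim; the null0 size/block-size values and the exact flags are placeholders:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_null_create null0 1000 512                       # size/block size: placeholders
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

The second nvmf_tgt started next on /tmp/host.sock (with --wait-for-rpc and -L bdev_nvme) plays the host/initiator role: it is the process that runs bdev_nvme_start_discovery against 10.0.0.2:8009 and whose bdev list the test polls.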
00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:10.289 13:44:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:10.290 [2024-05-15 13:44:23.329032] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:10.290 [2024-05-15 13:44:23.329551] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93038 ] 00:26:10.549 [2024-05-15 13:44:23.457921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:10.549 [2024-05-15 13:44:23.484719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.549 [2024-05-15 13:44:23.550427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.482 13:44:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.424 [2024-05-15 13:44:25.447569] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:12.424 [2024-05-15 13:44:25.447844] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:12.424 [2024-05-15 13:44:25.447912] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:12.424 [2024-05-15 13:44:25.453612] 
bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:12.424 [2024-05-15 13:44:25.510149] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:12.424 [2024-05-15 13:44:25.510508] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:12.424 [2024-05-15 13:44:25.510574] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:12.424 [2024-05-15 13:44:25.510716] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:12.424 [2024-05-15 13:44:25.510862] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.424 [2024-05-15 13:44:25.516247] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x198ae60 was disconnected and freed. delete nvme_qpair. 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.424 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.681 
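At host/discovery_remove_ifc.sh@75-76 the target-side address 10.0.0.2/24 is deleted and nvmf_tgt_if is brought down inside the namespace; the test then polls the host instance until the discovered bdev disappears. The wait loop visible in the surrounding trace (bdev_get_bdevs piped through jq, sort and xargs once per second) amounts to something like the sketch below; this is a hedged rendering of the polling logic, not the script verbatim:

  # Hedged sketch of the "wait until nvme0n1 goes away" polling seen in the trace
  # (rpc.py against the host instance listening on /tmp/host.sock).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  wait_for_bdev_gone() {
      local bdev=$1 names
      while :; do
          names=$($rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
          [[ "$names" != "$bdev" ]] && break   # bdev list no longer matches -> path removed
          sleep 1
      done
  }
  wait_for_bdev_gone nvme0n1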
13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:12.681 13:44:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:13.614 13:44:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.985 13:44:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.925 13:44:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.859 
13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.859 13:44:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.791 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.048 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.048 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.048 13:44:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.048 [2024-05-15 13:44:30.939119] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:18.048 [2024-05-15 13:44:30.939393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.048 [2024-05-15 13:44:30.939527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.048 [2024-05-15 13:44:30.939630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.048 [2024-05-15 13:44:30.939738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.048 [2024-05-15 13:44:30.939838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.048 [2024-05-15 13:44:30.939941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.048 [2024-05-15 13:44:30.940041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.048 [2024-05-15 13:44:30.940097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.048 [2024-05-15 13:44:30.940306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:18.048 [2024-05-15 13:44:30.940361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:18.048 [2024-05-15 13:44:30.940415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe70 is same with the state(5) to be set 00:26:18.048 [2024-05-15 13:44:30.949115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194fe70 (9): Bad file descriptor 00:26:18.048 [2024-05-15 13:44:30.959142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.982 13:44:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.982 [2024-05-15 13:44:31.971283] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:26:19.914 [2024-05-15 13:44:32.995330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:21.288 [2024-05-15 13:44:34.019332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:21.288 [2024-05-15 13:44:34.019491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194fe70 with addr=10.0.0.2, port=4420 00:26:21.288 [2024-05-15 13:44:34.019540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194fe70 is same with the state(5) to be set 00:26:21.288 [2024-05-15 13:44:34.020573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194fe70 (9): Bad file descriptor 00:26:21.288 [2024-05-15 13:44:34.020673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
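While the connect() retries above fail with errno 110 and the qpair is torn down, the controller state can be inspected over the same RPC socket. This is illustrative only and not part of discovery_remove_ifc.sh itself; bdev_nvme_get_controllers is a standard SPDK RPC, but its use here is an assumption for the sketch:

    # Inspect attached NVMe-oF controllers while the target interface is down
    # (purely for manual debugging; the test itself only polls bdev_get_bdevs).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq .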
00:26:21.288 [2024-05-15 13:44:34.020736] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:21.288 [2024-05-15 13:44:34.020824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.288 [2024-05-15 13:44:34.020875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.288 [2024-05-15 13:44:34.020912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.288 [2024-05-15 13:44:34.020942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.288 [2024-05-15 13:44:34.020973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.288 [2024-05-15 13:44:34.021003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.288 [2024-05-15 13:44:34.021035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.288 [2024-05-15 13:44:34.021065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.288 [2024-05-15 13:44:34.021097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.288 [2024-05-15 13:44:34.021126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.288 [2024-05-15 13:44:34.021155] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
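The failed-state messages above are the tail end of the link-down injection performed earlier (@75/@76), and the trace lines that follow restore the path (@82/@83) before waiting for the rediscovered namespace. A condensed sketch of that down/up cycle, using the namespace and interface names from this run and the wait_for_bdev helper sketched above:

    # Fault injection: remove the target address, drop the link, and wait for
    # the namespace bdev to drain out of the host app.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    wait_for_bdev ''

    # Recovery: re-add the address, bring the link up, and wait for discovery
    # to re-attach the subsystem (it comes back as nvme1, hence nvme1n1).
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1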
00:26:21.288 [2024-05-15 13:44:34.021194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194f470 (9): Bad file descriptor 00:26:21.288 [2024-05-15 13:44:34.021707] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:21.288 [2024-05-15 13:44:34.021762] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:21.288 13:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.288 13:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:21.288 13:44:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:22.225 13:44:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.158 [2024-05-15 13:44:36.034259] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.158 [2024-05-15 13:44:36.034531] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.158 [2024-05-15 13:44:36.034603] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.158 [2024-05-15 13:44:36.040311] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:23.158 [2024-05-15 13:44:36.095740] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:23.158 [2024-05-15 13:44:36.096060] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:23.158 [2024-05-15 13:44:36.096120] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:23.158 [2024-05-15 13:44:36.096214] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:23.158 [2024-05-15 13:44:36.096353] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.158 [2024-05-15 13:44:36.103051] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1999ed0 was disconnected and freed. delete nvme_qpair. 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 93038 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 93038 ']' 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 93038 00:26:23.158 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:23.159 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:23.159 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93038 00:26:23.417 killing process with pid 93038 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93038' 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@965 -- # kill 93038 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 93038 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.417 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.674 rmmod nvme_tcp 00:26:23.674 rmmod nvme_fabrics 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 93006 ']' 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 93006 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 93006 ']' 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 93006 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93006 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93006' 00:26:23.674 killing process with pid 93006 00:26:23.674 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 93006 00:26:23.675 [2024-05-15 13:44:36.579341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 93006 00:26:23.675 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:23.675 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:23.675 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:23.675 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:23.675 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:23.934 00:26:23.934 real 0m15.222s 00:26:23.934 user 0m23.628s 00:26:23.934 sys 0m3.263s 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:23.934 ************************************ 00:26:23.934 END TEST nvmf_discovery_remove_ifc 00:26:23.934 ************************************ 00:26:23.934 13:44:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:23.934 13:44:36 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:23.934 13:44:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:23.934 13:44:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:23.934 13:44:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:23.934 ************************************ 00:26:23.934 START TEST nvmf_identify_kernel_target 00:26:23.934 ************************************ 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:23.934 * Looking for test storage... 00:26:23.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.934 13:44:36 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.934 13:44:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:23.935 13:44:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:23.935 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:24.197 Cannot find device "nvmf_tgt_br" 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.197 Cannot find device "nvmf_tgt_br2" 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:24.197 Cannot find device "nvmf_tgt_br" 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:24.197 Cannot find device "nvmf_tgt_br2" 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.197 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:24.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:26:24.455 00:26:24.455 --- 10.0.0.2 ping statistics --- 00:26:24.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.455 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:24.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:26:24.455 00:26:24.455 --- 10.0.0.3 ping statistics --- 00:26:24.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.455 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:26:24.455 00:26:24.455 --- 10.0.0.1 ping statistics --- 00:26:24.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.455 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.455 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:24.456 13:44:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:24.456 13:44:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:24.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:24.714 Waiting for block devices as requested 00:26:24.972 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:24.972 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:25.230 No valid GPT data, bailing 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:25.230 No valid GPT data, bailing 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:25.230 No valid GPT data, bailing 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:25.230 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:25.490 No valid GPT data, bailing 00:26:25.490 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:25.490 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -a 10.0.0.1 -t tcp -s 4420 00:26:25.491 00:26:25.491 Discovery Log Number of Records 2, Generation counter 2 00:26:25.491 =====Discovery Log Entry 0====== 00:26:25.491 trtype: tcp 00:26:25.491 adrfam: ipv4 00:26:25.491 subtype: current discovery subsystem 00:26:25.491 treq: not specified, sq flow control disable supported 00:26:25.491 portid: 1 00:26:25.491 trsvcid: 4420 00:26:25.491 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:25.491 traddr: 10.0.0.1 00:26:25.491 eflags: none 00:26:25.491 sectype: none 00:26:25.491 =====Discovery Log Entry 1====== 00:26:25.491 trtype: tcp 00:26:25.491 adrfam: ipv4 00:26:25.491 subtype: nvme subsystem 00:26:25.491 treq: not specified, sq flow control disable supported 00:26:25.491 portid: 1 00:26:25.491 trsvcid: 4420 00:26:25.491 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:25.491 traddr: 10.0.0.1 00:26:25.491 eflags: none 00:26:25.491 sectype: none 00:26:25.491 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:25.491 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:25.755 ===================================================== 00:26:25.755 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:25.755 ===================================================== 00:26:25.755 Controller Capabilities/Features 00:26:25.755 ================================ 00:26:25.755 Vendor ID: 0000 00:26:25.755 Subsystem Vendor ID: 0000 00:26:25.755 Serial Number: 7f02d2b481bf7b08668d 00:26:25.755 Model Number: Linux 00:26:25.755 Firmware Version: 6.5.12-2 00:26:25.755 Recommended Arb Burst: 0 
00:26:25.755 IEEE OUI Identifier: 00 00 00 00:26:25.755 Multi-path I/O 00:26:25.755 May have multiple subsystem ports: No 00:26:25.755 May have multiple controllers: No 00:26:25.755 Associated with SR-IOV VF: No 00:26:25.755 Max Data Transfer Size: Unlimited 00:26:25.755 Max Number of Namespaces: 0 00:26:25.755 Max Number of I/O Queues: 1024 00:26:25.755 NVMe Specification Version (VS): 1.3 00:26:25.755 NVMe Specification Version (Identify): 1.3 00:26:25.755 Maximum Queue Entries: 1024 00:26:25.755 Contiguous Queues Required: No 00:26:25.755 Arbitration Mechanisms Supported 00:26:25.755 Weighted Round Robin: Not Supported 00:26:25.755 Vendor Specific: Not Supported 00:26:25.755 Reset Timeout: 7500 ms 00:26:25.755 Doorbell Stride: 4 bytes 00:26:25.755 NVM Subsystem Reset: Not Supported 00:26:25.755 Command Sets Supported 00:26:25.755 NVM Command Set: Supported 00:26:25.755 Boot Partition: Not Supported 00:26:25.755 Memory Page Size Minimum: 4096 bytes 00:26:25.755 Memory Page Size Maximum: 4096 bytes 00:26:25.755 Persistent Memory Region: Not Supported 00:26:25.755 Optional Asynchronous Events Supported 00:26:25.755 Namespace Attribute Notices: Not Supported 00:26:25.755 Firmware Activation Notices: Not Supported 00:26:25.755 ANA Change Notices: Not Supported 00:26:25.755 PLE Aggregate Log Change Notices: Not Supported 00:26:25.755 LBA Status Info Alert Notices: Not Supported 00:26:25.755 EGE Aggregate Log Change Notices: Not Supported 00:26:25.755 Normal NVM Subsystem Shutdown event: Not Supported 00:26:25.755 Zone Descriptor Change Notices: Not Supported 00:26:25.755 Discovery Log Change Notices: Supported 00:26:25.755 Controller Attributes 00:26:25.755 128-bit Host Identifier: Not Supported 00:26:25.755 Non-Operational Permissive Mode: Not Supported 00:26:25.755 NVM Sets: Not Supported 00:26:25.755 Read Recovery Levels: Not Supported 00:26:25.755 Endurance Groups: Not Supported 00:26:25.755 Predictable Latency Mode: Not Supported 00:26:25.755 Traffic Based Keep ALive: Not Supported 00:26:25.755 Namespace Granularity: Not Supported 00:26:25.755 SQ Associations: Not Supported 00:26:25.755 UUID List: Not Supported 00:26:25.755 Multi-Domain Subsystem: Not Supported 00:26:25.755 Fixed Capacity Management: Not Supported 00:26:25.755 Variable Capacity Management: Not Supported 00:26:25.755 Delete Endurance Group: Not Supported 00:26:25.755 Delete NVM Set: Not Supported 00:26:25.755 Extended LBA Formats Supported: Not Supported 00:26:25.755 Flexible Data Placement Supported: Not Supported 00:26:25.755 00:26:25.755 Controller Memory Buffer Support 00:26:25.755 ================================ 00:26:25.755 Supported: No 00:26:25.755 00:26:25.755 Persistent Memory Region Support 00:26:25.755 ================================ 00:26:25.755 Supported: No 00:26:25.755 00:26:25.755 Admin Command Set Attributes 00:26:25.755 ============================ 00:26:25.755 Security Send/Receive: Not Supported 00:26:25.755 Format NVM: Not Supported 00:26:25.755 Firmware Activate/Download: Not Supported 00:26:25.755 Namespace Management: Not Supported 00:26:25.755 Device Self-Test: Not Supported 00:26:25.755 Directives: Not Supported 00:26:25.755 NVMe-MI: Not Supported 00:26:25.755 Virtualization Management: Not Supported 00:26:25.755 Doorbell Buffer Config: Not Supported 00:26:25.755 Get LBA Status Capability: Not Supported 00:26:25.755 Command & Feature Lockdown Capability: Not Supported 00:26:25.755 Abort Command Limit: 1 00:26:25.755 Async Event Request Limit: 1 00:26:25.755 Number of Firmware Slots: N/A 
00:26:25.755 Firmware Slot 1 Read-Only: N/A 00:26:25.755 Firmware Activation Without Reset: N/A 00:26:25.755 Multiple Update Detection Support: N/A 00:26:25.755 Firmware Update Granularity: No Information Provided 00:26:25.755 Per-Namespace SMART Log: No 00:26:25.755 Asymmetric Namespace Access Log Page: Not Supported 00:26:25.755 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:25.755 Command Effects Log Page: Not Supported 00:26:25.755 Get Log Page Extended Data: Supported 00:26:25.756 Telemetry Log Pages: Not Supported 00:26:25.756 Persistent Event Log Pages: Not Supported 00:26:25.756 Supported Log Pages Log Page: May Support 00:26:25.756 Commands Supported & Effects Log Page: Not Supported 00:26:25.756 Feature Identifiers & Effects Log Page:May Support 00:26:25.756 NVMe-MI Commands & Effects Log Page: May Support 00:26:25.756 Data Area 4 for Telemetry Log: Not Supported 00:26:25.756 Error Log Page Entries Supported: 1 00:26:25.756 Keep Alive: Not Supported 00:26:25.756 00:26:25.756 NVM Command Set Attributes 00:26:25.756 ========================== 00:26:25.756 Submission Queue Entry Size 00:26:25.756 Max: 1 00:26:25.756 Min: 1 00:26:25.756 Completion Queue Entry Size 00:26:25.756 Max: 1 00:26:25.756 Min: 1 00:26:25.756 Number of Namespaces: 0 00:26:25.756 Compare Command: Not Supported 00:26:25.756 Write Uncorrectable Command: Not Supported 00:26:25.756 Dataset Management Command: Not Supported 00:26:25.756 Write Zeroes Command: Not Supported 00:26:25.756 Set Features Save Field: Not Supported 00:26:25.756 Reservations: Not Supported 00:26:25.756 Timestamp: Not Supported 00:26:25.756 Copy: Not Supported 00:26:25.756 Volatile Write Cache: Not Present 00:26:25.756 Atomic Write Unit (Normal): 1 00:26:25.756 Atomic Write Unit (PFail): 1 00:26:25.756 Atomic Compare & Write Unit: 1 00:26:25.756 Fused Compare & Write: Not Supported 00:26:25.756 Scatter-Gather List 00:26:25.756 SGL Command Set: Supported 00:26:25.756 SGL Keyed: Not Supported 00:26:25.756 SGL Bit Bucket Descriptor: Not Supported 00:26:25.756 SGL Metadata Pointer: Not Supported 00:26:25.756 Oversized SGL: Not Supported 00:26:25.756 SGL Metadata Address: Not Supported 00:26:25.756 SGL Offset: Supported 00:26:25.756 Transport SGL Data Block: Not Supported 00:26:25.756 Replay Protected Memory Block: Not Supported 00:26:25.756 00:26:25.756 Firmware Slot Information 00:26:25.756 ========================= 00:26:25.756 Active slot: 0 00:26:25.756 00:26:25.756 00:26:25.756 Error Log 00:26:25.756 ========= 00:26:25.756 00:26:25.756 Active Namespaces 00:26:25.756 ================= 00:26:25.756 Discovery Log Page 00:26:25.756 ================== 00:26:25.756 Generation Counter: 2 00:26:25.756 Number of Records: 2 00:26:25.756 Record Format: 0 00:26:25.756 00:26:25.756 Discovery Log Entry 0 00:26:25.756 ---------------------- 00:26:25.756 Transport Type: 3 (TCP) 00:26:25.756 Address Family: 1 (IPv4) 00:26:25.756 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:25.756 Entry Flags: 00:26:25.756 Duplicate Returned Information: 0 00:26:25.756 Explicit Persistent Connection Support for Discovery: 0 00:26:25.756 Transport Requirements: 00:26:25.756 Secure Channel: Not Specified 00:26:25.756 Port ID: 1 (0x0001) 00:26:25.756 Controller ID: 65535 (0xffff) 00:26:25.756 Admin Max SQ Size: 32 00:26:25.756 Transport Service Identifier: 4420 00:26:25.756 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:25.756 Transport Address: 10.0.0.1 00:26:25.756 Discovery Log Entry 1 00:26:25.756 ---------------------- 
00:26:25.756 Transport Type: 3 (TCP) 00:26:25.756 Address Family: 1 (IPv4) 00:26:25.756 Subsystem Type: 2 (NVM Subsystem) 00:26:25.756 Entry Flags: 00:26:25.756 Duplicate Returned Information: 0 00:26:25.756 Explicit Persistent Connection Support for Discovery: 0 00:26:25.756 Transport Requirements: 00:26:25.756 Secure Channel: Not Specified 00:26:25.756 Port ID: 1 (0x0001) 00:26:25.756 Controller ID: 65535 (0xffff) 00:26:25.756 Admin Max SQ Size: 32 00:26:25.756 Transport Service Identifier: 4420 00:26:25.756 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:25.756 Transport Address: 10.0.0.1 00:26:25.756 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:25.756 get_feature(0x01) failed 00:26:25.756 get_feature(0x02) failed 00:26:25.756 get_feature(0x04) failed 00:26:25.756 ===================================================== 00:26:25.756 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:25.756 ===================================================== 00:26:25.756 Controller Capabilities/Features 00:26:25.756 ================================ 00:26:25.756 Vendor ID: 0000 00:26:25.756 Subsystem Vendor ID: 0000 00:26:25.756 Serial Number: 3b03d43fdc287e283328 00:26:25.756 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:25.756 Firmware Version: 6.5.12-2 00:26:25.756 Recommended Arb Burst: 6 00:26:25.756 IEEE OUI Identifier: 00 00 00 00:26:25.756 Multi-path I/O 00:26:25.756 May have multiple subsystem ports: Yes 00:26:25.756 May have multiple controllers: Yes 00:26:25.756 Associated with SR-IOV VF: No 00:26:25.756 Max Data Transfer Size: Unlimited 00:26:25.756 Max Number of Namespaces: 1024 00:26:25.756 Max Number of I/O Queues: 128 00:26:25.756 NVMe Specification Version (VS): 1.3 00:26:25.756 NVMe Specification Version (Identify): 1.3 00:26:25.756 Maximum Queue Entries: 1024 00:26:25.756 Contiguous Queues Required: No 00:26:25.756 Arbitration Mechanisms Supported 00:26:25.756 Weighted Round Robin: Not Supported 00:26:25.756 Vendor Specific: Not Supported 00:26:25.756 Reset Timeout: 7500 ms 00:26:25.756 Doorbell Stride: 4 bytes 00:26:25.756 NVM Subsystem Reset: Not Supported 00:26:25.756 Command Sets Supported 00:26:25.756 NVM Command Set: Supported 00:26:25.756 Boot Partition: Not Supported 00:26:25.756 Memory Page Size Minimum: 4096 bytes 00:26:25.756 Memory Page Size Maximum: 4096 bytes 00:26:25.756 Persistent Memory Region: Not Supported 00:26:25.756 Optional Asynchronous Events Supported 00:26:25.756 Namespace Attribute Notices: Supported 00:26:25.756 Firmware Activation Notices: Not Supported 00:26:25.756 ANA Change Notices: Supported 00:26:25.756 PLE Aggregate Log Change Notices: Not Supported 00:26:25.756 LBA Status Info Alert Notices: Not Supported 00:26:25.756 EGE Aggregate Log Change Notices: Not Supported 00:26:25.756 Normal NVM Subsystem Shutdown event: Not Supported 00:26:25.756 Zone Descriptor Change Notices: Not Supported 00:26:25.756 Discovery Log Change Notices: Not Supported 00:26:25.756 Controller Attributes 00:26:25.756 128-bit Host Identifier: Supported 00:26:25.756 Non-Operational Permissive Mode: Not Supported 00:26:25.756 NVM Sets: Not Supported 00:26:25.756 Read Recovery Levels: Not Supported 00:26:25.756 Endurance Groups: Not Supported 00:26:25.756 Predictable Latency Mode: Not Supported 00:26:25.756 Traffic Based Keep ALive: 
Supported 00:26:25.756 Namespace Granularity: Not Supported 00:26:25.756 SQ Associations: Not Supported 00:26:25.756 UUID List: Not Supported 00:26:25.756 Multi-Domain Subsystem: Not Supported 00:26:25.756 Fixed Capacity Management: Not Supported 00:26:25.756 Variable Capacity Management: Not Supported 00:26:25.756 Delete Endurance Group: Not Supported 00:26:25.756 Delete NVM Set: Not Supported 00:26:25.756 Extended LBA Formats Supported: Not Supported 00:26:25.756 Flexible Data Placement Supported: Not Supported 00:26:25.756 00:26:25.756 Controller Memory Buffer Support 00:26:25.756 ================================ 00:26:25.756 Supported: No 00:26:25.756 00:26:25.756 Persistent Memory Region Support 00:26:25.756 ================================ 00:26:25.756 Supported: No 00:26:25.756 00:26:25.756 Admin Command Set Attributes 00:26:25.756 ============================ 00:26:25.756 Security Send/Receive: Not Supported 00:26:25.756 Format NVM: Not Supported 00:26:25.756 Firmware Activate/Download: Not Supported 00:26:25.756 Namespace Management: Not Supported 00:26:25.756 Device Self-Test: Not Supported 00:26:25.756 Directives: Not Supported 00:26:25.756 NVMe-MI: Not Supported 00:26:25.756 Virtualization Management: Not Supported 00:26:25.756 Doorbell Buffer Config: Not Supported 00:26:25.756 Get LBA Status Capability: Not Supported 00:26:25.756 Command & Feature Lockdown Capability: Not Supported 00:26:25.756 Abort Command Limit: 4 00:26:25.756 Async Event Request Limit: 4 00:26:25.756 Number of Firmware Slots: N/A 00:26:25.756 Firmware Slot 1 Read-Only: N/A 00:26:25.756 Firmware Activation Without Reset: N/A 00:26:25.756 Multiple Update Detection Support: N/A 00:26:25.757 Firmware Update Granularity: No Information Provided 00:26:25.757 Per-Namespace SMART Log: Yes 00:26:25.757 Asymmetric Namespace Access Log Page: Supported 00:26:25.757 ANA Transition Time : 10 sec 00:26:25.757 00:26:25.757 Asymmetric Namespace Access Capabilities 00:26:25.757 ANA Optimized State : Supported 00:26:25.757 ANA Non-Optimized State : Supported 00:26:25.757 ANA Inaccessible State : Supported 00:26:25.757 ANA Persistent Loss State : Supported 00:26:25.757 ANA Change State : Supported 00:26:25.757 ANAGRPID is not changed : No 00:26:25.757 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:25.757 00:26:25.757 ANA Group Identifier Maximum : 128 00:26:25.757 Number of ANA Group Identifiers : 128 00:26:25.757 Max Number of Allowed Namespaces : 1024 00:26:25.757 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:25.757 Command Effects Log Page: Supported 00:26:25.757 Get Log Page Extended Data: Supported 00:26:25.757 Telemetry Log Pages: Not Supported 00:26:25.757 Persistent Event Log Pages: Not Supported 00:26:25.757 Supported Log Pages Log Page: May Support 00:26:25.757 Commands Supported & Effects Log Page: Not Supported 00:26:25.757 Feature Identifiers & Effects Log Page:May Support 00:26:25.757 NVMe-MI Commands & Effects Log Page: May Support 00:26:25.757 Data Area 4 for Telemetry Log: Not Supported 00:26:25.757 Error Log Page Entries Supported: 128 00:26:25.757 Keep Alive: Supported 00:26:25.757 Keep Alive Granularity: 1000 ms 00:26:25.757 00:26:25.757 NVM Command Set Attributes 00:26:25.757 ========================== 00:26:25.757 Submission Queue Entry Size 00:26:25.757 Max: 64 00:26:25.757 Min: 64 00:26:25.757 Completion Queue Entry Size 00:26:25.757 Max: 16 00:26:25.757 Min: 16 00:26:25.757 Number of Namespaces: 1024 00:26:25.757 Compare Command: Not Supported 00:26:25.757 Write Uncorrectable Command: Not 
Supported 00:26:25.757 Dataset Management Command: Supported 00:26:25.757 Write Zeroes Command: Supported 00:26:25.757 Set Features Save Field: Not Supported 00:26:25.757 Reservations: Not Supported 00:26:25.757 Timestamp: Not Supported 00:26:25.757 Copy: Not Supported 00:26:25.757 Volatile Write Cache: Present 00:26:25.757 Atomic Write Unit (Normal): 1 00:26:25.757 Atomic Write Unit (PFail): 1 00:26:25.757 Atomic Compare & Write Unit: 1 00:26:25.757 Fused Compare & Write: Not Supported 00:26:25.757 Scatter-Gather List 00:26:25.757 SGL Command Set: Supported 00:26:25.757 SGL Keyed: Not Supported 00:26:25.757 SGL Bit Bucket Descriptor: Not Supported 00:26:25.757 SGL Metadata Pointer: Not Supported 00:26:25.757 Oversized SGL: Not Supported 00:26:25.757 SGL Metadata Address: Not Supported 00:26:25.757 SGL Offset: Supported 00:26:25.757 Transport SGL Data Block: Not Supported 00:26:25.757 Replay Protected Memory Block: Not Supported 00:26:25.757 00:26:25.757 Firmware Slot Information 00:26:25.757 ========================= 00:26:25.757 Active slot: 0 00:26:25.757 00:26:25.757 Asymmetric Namespace Access 00:26:25.757 =========================== 00:26:25.757 Change Count : 0 00:26:25.757 Number of ANA Group Descriptors : 1 00:26:25.757 ANA Group Descriptor : 0 00:26:25.757 ANA Group ID : 1 00:26:25.757 Number of NSID Values : 1 00:26:25.757 Change Count : 0 00:26:25.757 ANA State : 1 00:26:25.757 Namespace Identifier : 1 00:26:25.757 00:26:25.757 Commands Supported and Effects 00:26:25.757 ============================== 00:26:25.757 Admin Commands 00:26:25.757 -------------- 00:26:25.757 Get Log Page (02h): Supported 00:26:25.757 Identify (06h): Supported 00:26:25.757 Abort (08h): Supported 00:26:25.757 Set Features (09h): Supported 00:26:25.757 Get Features (0Ah): Supported 00:26:25.757 Asynchronous Event Request (0Ch): Supported 00:26:25.757 Keep Alive (18h): Supported 00:26:25.757 I/O Commands 00:26:25.757 ------------ 00:26:25.757 Flush (00h): Supported 00:26:25.757 Write (01h): Supported LBA-Change 00:26:25.757 Read (02h): Supported 00:26:25.757 Write Zeroes (08h): Supported LBA-Change 00:26:25.757 Dataset Management (09h): Supported 00:26:25.757 00:26:25.757 Error Log 00:26:25.757 ========= 00:26:25.757 Entry: 0 00:26:25.757 Error Count: 0x3 00:26:25.757 Submission Queue Id: 0x0 00:26:25.757 Command Id: 0x5 00:26:25.757 Phase Bit: 0 00:26:25.757 Status Code: 0x2 00:26:25.757 Status Code Type: 0x0 00:26:25.757 Do Not Retry: 1 00:26:25.757 Error Location: 0x28 00:26:25.757 LBA: 0x0 00:26:25.757 Namespace: 0x0 00:26:25.757 Vendor Log Page: 0x0 00:26:25.757 ----------- 00:26:25.757 Entry: 1 00:26:25.757 Error Count: 0x2 00:26:25.757 Submission Queue Id: 0x0 00:26:25.757 Command Id: 0x5 00:26:25.757 Phase Bit: 0 00:26:25.757 Status Code: 0x2 00:26:25.757 Status Code Type: 0x0 00:26:25.757 Do Not Retry: 1 00:26:25.757 Error Location: 0x28 00:26:25.757 LBA: 0x0 00:26:25.757 Namespace: 0x0 00:26:25.757 Vendor Log Page: 0x0 00:26:25.757 ----------- 00:26:25.757 Entry: 2 00:26:25.757 Error Count: 0x1 00:26:25.757 Submission Queue Id: 0x0 00:26:25.757 Command Id: 0x4 00:26:25.757 Phase Bit: 0 00:26:25.757 Status Code: 0x2 00:26:25.757 Status Code Type: 0x0 00:26:25.757 Do Not Retry: 1 00:26:25.757 Error Location: 0x28 00:26:25.757 LBA: 0x0 00:26:25.757 Namespace: 0x0 00:26:25.757 Vendor Log Page: 0x0 00:26:25.757 00:26:25.757 Number of Queues 00:26:25.757 ================ 00:26:25.757 Number of I/O Submission Queues: 128 00:26:25.757 Number of I/O Completion Queues: 128 00:26:25.757 00:26:25.757 ZNS 
Specific Controller Data 00:26:25.757 ============================ 00:26:25.757 Zone Append Size Limit: 0 00:26:25.757 00:26:25.757 00:26:25.757 Active Namespaces 00:26:25.757 ================= 00:26:25.757 get_feature(0x05) failed 00:26:25.757 Namespace ID:1 00:26:25.757 Command Set Identifier: NVM (00h) 00:26:25.757 Deallocate: Supported 00:26:25.757 Deallocated/Unwritten Error: Not Supported 00:26:25.757 Deallocated Read Value: Unknown 00:26:25.757 Deallocate in Write Zeroes: Not Supported 00:26:25.757 Deallocated Guard Field: 0xFFFF 00:26:25.757 Flush: Supported 00:26:25.757 Reservation: Not Supported 00:26:25.757 Namespace Sharing Capabilities: Multiple Controllers 00:26:25.757 Size (in LBAs): 1310720 (5GiB) 00:26:25.757 Capacity (in LBAs): 1310720 (5GiB) 00:26:25.757 Utilization (in LBAs): 1310720 (5GiB) 00:26:25.757 UUID: c7ddfa6d-01b5-41ec-b620-39628caa1844 00:26:25.757 Thin Provisioning: Not Supported 00:26:25.757 Per-NS Atomic Units: Yes 00:26:25.757 Atomic Boundary Size (Normal): 0 00:26:25.757 Atomic Boundary Size (PFail): 0 00:26:25.757 Atomic Boundary Offset: 0 00:26:25.757 NGUID/EUI64 Never Reused: No 00:26:25.757 ANA group ID: 1 00:26:25.757 Namespace Write Protected: No 00:26:25.757 Number of LBA Formats: 1 00:26:25.757 Current LBA Format: LBA Format #00 00:26:25.758 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:26:25.758 00:26:25.758 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:25.758 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:25.758 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.022 rmmod nvme_tcp 00:26:26.022 rmmod nvme_fabrics 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:26.022 13:44:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:26.022 13:44:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:26.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:26.992 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.992 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:26:26.992 ************************************ 00:26:26.992 END TEST nvmf_identify_kernel_target 00:26:26.992 ************************************ 00:26:26.992 00:26:26.992 real 0m3.082s 00:26:26.992 user 0m1.010s 00:26:26.992 sys 0m1.534s 00:26:26.992 13:44:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:26.992 13:44:39 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:26.992 13:44:40 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:26.992 13:44:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:26.992 13:44:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:26.992 13:44:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:26.992 ************************************ 00:26:26.992 START TEST nvmf_auth_host 00:26:26.992 ************************************ 00:26:26.992 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:27.291 * Looking for test storage... 
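The clean_kernel_target steps traced above tear down the kernel nvmet target through configfs in reverse order of construction: disable and unlink the namespace, remove the port-to-subsystem link, the namespace directory, the port and the subsystem, then unload nvmet_tcp/nvmet. The auth test starting here rebuilds a very similar layout later via configure_kernel_target. For reference, a minimal sketch of the setup side for the same subsystem is shown below; root privileges and the /dev/nvme0n1 backing device are assumptions, and this is not SPDK's helper verbatim:

  modprobe nvmet_tcp    # pulls in nvmet as a dependency
  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
  mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  echo -n /dev/nvme0n1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  mkdir /sys/kernel/config/nvmet/ports/1
  echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn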
00:26:27.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.291 13:44:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:27.292 Cannot find device "nvmf_tgt_br" 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.292 Cannot find device "nvmf_tgt_br2" 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:27.292 Cannot find device "nvmf_tgt_br" 
00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:27.292 Cannot find device "nvmf_tgt_br2" 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.292 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:27.292 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:27.589 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.589 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:26:27.589 00:26:27.589 --- 10.0.0.2 ping statistics --- 00:26:27.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.589 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:27.589 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:27.589 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:26:27.589 00:26:27.589 --- 10.0.0.3 ping statistics --- 00:26:27.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.589 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:27.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:27.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:26:27.589 00:26:27.589 --- 10.0.0.1 ping statistics --- 00:26:27.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.589 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=93919 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 93919 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 93919 ']' 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
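At this point nvmf_veth_init has left the initiator interface nvmf_init_if (10.0.0.1/24) on the host and the two target interfaces (10.0.0.2/24 on nvmf_tgt_if, 10.0.0.3/24 on nvmf_tgt_if2) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, and the three pings confirm reachability in both directions before nvmf_tgt is started inside the namespace. A few read-only iproute2 commands for inspecting the resulting topology, shown only as a sketch (they are not part of the test script):

  ip netns list                                    # expect nvmf_tgt_ns_spdk
  ip -br addr show dev nvmf_init_if                # 10.0.0.1/24 on the host side
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show  # 10.0.0.2/24 and 10.0.0.3/24 on nvmf_tgt_if / nvmf_tgt_if2
  bridge link show                                 # nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 enslaved to nvmf_br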
00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:27.589 13:44:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=088459396692922770907f40e81bbdce 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iHk 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 088459396692922770907f40e81bbdce 0 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 088459396692922770907f40e81bbdce 0 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=088459396692922770907f40e81bbdce 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iHk 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iHk 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.iHk 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.964 13:44:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9bd7aaa07408ab25dde5e0cb9b7840de5088432103ec07e8abdacb3e4e7750b2 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hoj 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9bd7aaa07408ab25dde5e0cb9b7840de5088432103ec07e8abdacb3e4e7750b2 3 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9bd7aaa07408ab25dde5e0cb9b7840de5088432103ec07e8abdacb3e4e7750b2 3 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9bd7aaa07408ab25dde5e0cb9b7840de5088432103ec07e8abdacb3e4e7750b2 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hoj 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hoj 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.hoj 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82efbf78b2fb1bc088edcaba9e477b40a0e8825726a1adf7 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:28.964 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.mjZ 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82efbf78b2fb1bc088edcaba9e477b40a0e8825726a1adf7 0 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82efbf78b2fb1bc088edcaba9e477b40a0e8825726a1adf7 0 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82efbf78b2fb1bc088edcaba9e477b40a0e8825726a1adf7 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.mjZ 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.mjZ 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.mjZ 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c1456b043bd2eb92f607e08707d94516d4d4315bf9d550c5 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gwp 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c1456b043bd2eb92f607e08707d94516d4d4315bf9d550c5 2 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c1456b043bd2eb92f607e08707d94516d4d4315bf9d550c5 2 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c1456b043bd2eb92f607e08707d94516d4d4315bf9d550c5 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gwp 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gwp 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.gwp 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6cd373257ab6ddfb01dfa87afa3650ff 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.2ff 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 
6cd373257ab6ddfb01dfa87afa3650ff 1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6cd373257ab6ddfb01dfa87afa3650ff 1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6cd373257ab6ddfb01dfa87afa3650ff 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:28.965 13:44:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.2ff 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.2ff 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2ff 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e5e0b424c6e9b1965e858c2572b5d132 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zFl 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e5e0b424c6e9b1965e858c2572b5d132 1 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e5e0b424c6e9b1965e858c2572b5d132 1 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e5e0b424c6e9b1965e858c2572b5d132 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zFl 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zFl 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zFl 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- 
# len=48 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:28.965 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c58776886f6654e3d406d8691284782a81e9788ae8f2671b 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qrm 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c58776886f6654e3d406d8691284782a81e9788ae8f2671b 2 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c58776886f6654e3d406d8691284782a81e9788ae8f2671b 2 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c58776886f6654e3d406d8691284782a81e9788ae8f2671b 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qrm 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qrm 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Qrm 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=298655e5a74320e271e6c65c64aa991a 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.67i 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 298655e5a74320e271e6c65c64aa991a 0 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 298655e5a74320e271e6c65c64aa991a 0 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=298655e5a74320e271e6c65c64aa991a 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.67i 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.67i 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.67i 00:26:29.223 
13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3f95861d7961bc005487bd762e5234168ec22c762bf9e67f449e7a0631cb29bc 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pMt 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3f95861d7961bc005487bd762e5234168ec22c762bf9e67f449e7a0631cb29bc 3 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3f95861d7961bc005487bd762e5234168ec22c762bf9e67f449e7a0631cb29bc 3 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3f95861d7961bc005487bd762e5234168ec22c762bf9e67f449e7a0631cb29bc 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pMt 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pMt 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pMt 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93919 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 93919 ']' 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:29.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
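Each gen_dhchap_key call above draws random bytes with xxd -p from /dev/urandom, has the short python helper wrap them into a DHHC-1 secret for the requested digest (null, sha256, sha384 or sha512), writes the result to a mktemp file and restricts it to mode 0600. Outside the test harness, a recent nvme-cli can emit secrets in the same DHHC-1 representation; the flags below are an assumption about the installed nvme-cli version, and this is not the helper the script itself uses:

  # 32-byte secret with no HMAC transform applied
  nvme gen-dhchap-key --key-length=32 --hmac=0
  # 64-byte secret transformed with SHA-512 and bound to a host NQN (the NQN here is illustrative)
  nvme gen-dhchap-key --key-length=64 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0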
00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:29.223 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.480 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:29.480 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:29.481 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.481 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iHk 00:26:29.481 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.481 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.hoj ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hoj 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.mjZ 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.gwp ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gwp 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2ff 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zFl ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zFl 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Qrm 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.67i ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.67i 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pMt 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
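The nvmet_auth_init and configure_kernel_target entries that follow build a kernel NVMe/TCP target through configfs, restrict it to the test host NQN, and give that host a DH-HMAC-CHAP secret; the host side then attaches through SPDK with matching parameters. Condensed into a stand-alone sketch: the xtrace does not show redirection targets, so the configfs attribute names below are the standard kernel nvmet ones (a kernel with nvmet in-band auth, CONFIG_NVME_TARGET_AUTH, is assumed), while the NQNs, address, backing device and key files are the ones used in this run.

# --- target side (root): kernel nvmet over configfs ---
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # first unused, non-zoned device found below
echo 1 > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

echo 0 > "$subsys/attr_allow_any_host"                   # only allowed_hosts entries may connect
ln -s "$host" "$subsys/allowed_hosts/"

# nvmet_auth_set_key, first sha256/ffdhe2048/keyid=1 pass: the key files hold DHHC-1 strings
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
cat /tmp/spdk.key-null.mjZ > "$host/dhchap_key"          # key1
cat /tmp/spdk.key-sha384.gwp > "$host/dhchap_ctrl_key"   # ckey1

# --- host side: SPDK initiator authenticates with the keyring entries registered above ---
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers          # expected to report nvme0 once authentication succeeds
$rpc bdev_nvme_detach_controller nvme0  # detach before the next digest/dhgroup/keyid combination

With the target configured this way, the nvme discover run in the log below lists both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420, and each attach/detach pair further down repeats the host-side half with a different digest, DH group and key index.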
00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:29.768 13:44:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:30.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:30.026 Waiting for block devices as requested 00:26:30.284 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:30.284 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:31.239 13:44:43 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:31.239 No valid GPT data, bailing 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:26:31.239 No valid GPT data, bailing 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:26:31.239 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:26:31.240 No valid GPT data, bailing 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:31.240 No valid GPT data, bailing 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:31.240 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:31.500 13:44:44 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -a 10.0.0.1 -t tcp -s 4420 00:26:31.500 00:26:31.500 Discovery Log Number of Records 2, Generation counter 2 00:26:31.500 =====Discovery Log Entry 0====== 00:26:31.500 trtype: tcp 00:26:31.500 adrfam: ipv4 00:26:31.500 subtype: current discovery subsystem 00:26:31.500 treq: not specified, sq flow control disable supported 00:26:31.500 portid: 1 00:26:31.500 trsvcid: 4420 00:26:31.500 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:31.500 traddr: 10.0.0.1 00:26:31.500 eflags: none 00:26:31.500 sectype: none 00:26:31.500 =====Discovery Log Entry 1====== 00:26:31.500 trtype: tcp 00:26:31.500 adrfam: ipv4 00:26:31.500 subtype: nvme subsystem 00:26:31.500 treq: not specified, sq flow control disable supported 00:26:31.500 portid: 1 00:26:31.500 trsvcid: 4420 00:26:31.500 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:31.500 traddr: 10.0.0.1 00:26:31.500 eflags: none 00:26:31.500 sectype: none 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:31.500 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.501 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 nvme0n1 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 nvme0n1 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.765 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 nvme0n1 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 13:44:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.027 13:44:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.027 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.028 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 nvme0n1 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:32.286 13:44:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 nvme0n1 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.286 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.544 nvme0n1 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:32.544 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:32.802 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.060 13:44:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.060 13:44:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.060 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.060 13:44:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.060 nvme0n1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.060 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.328 nvme0n1 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.328 13:44:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.328 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.329 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.593 nvme0n1 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.593 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.594 nvme0n1 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.594 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
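The nvmf/common.sh@741-748 entries ending just above (and the @750/@755 entries that follow) are the get_main_ns_ip helper picking the address the initiator dials before each attach. A minimal sketch of that selection logic, reconstructed only from this xtrace output — the real helper lives in nvmf/common.sh, the transport variable name is assumed here, and the error handling is a guess:

  # Sketch reconstructed from the trace; not copied verbatim from nvmf/common.sh.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP # RDMA jobs dial the first target IP
          ["tcp"]=NVMF_INITIATOR_IP     # TCP jobs (this run) dial the initiator IP
      )
      # The trace shows "tcp" being tested here; the variable holding it is assumed.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # Indirect expansion turns the variable name into the address (10.0.0.1 in this run).
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }

In this job the helper always resolves to NVMF_INITIATOR_IP=10.0.0.1, which is why every bdev_nvme_attach_controller call in the trace targets -a 10.0.0.1 -s 4420.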
00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.852 nvme0n1 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.852 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.110 13:44:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
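Every digest/dhgroup/key-index pass traced in this log repeats the same shape: nvmet_auth_set_key programs the key material for the host on the kernel target (the echo 'hmac(sha256)' / echo <dhgroup> / echo DHHC-... lines at auth.sh@48-51), then connect_authenticate restricts the initiator to that digest and DH group, attaches with the matching key pair, verifies the controller, and detaches. Condensed into a plain shell sequence using only the RPC names and flags visible in the trace (the concrete values below are just the ones from the current iteration):

  # Illustrative condensation of one loop pass; values taken from this iteration.
  digest=sha256 dhgroup=ffdhe4096 keyid=0

  # Target side: install the key (and controller key, when one exists) for this host.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: only offer this digest/dhgroup, then attach with the matching keys.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

  # Verify the authenticated controller came up, then tear it down for the next pass.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

When a key index has no controller key (keyid 4, whose ckey is empty in the trace), the --dhchap-ctrlr-key argument is simply omitted from the attach; the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion visible above takes care of that.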
00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.697 nvme0n1 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.697 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.698 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.959 nvme0n1 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.959 13:44:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.959 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.960 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.218 nvme0n1 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.218 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.219 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.477 nvme0n1 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.477 13:44:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:35.477 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.738 nvme0n1 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:35.738 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:35.739 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.739 13:44:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.641 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.642 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.642 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.642 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.642 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.901 nvme0n1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.901 13:44:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.469 nvme0n1 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.469 
13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.469 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 nvme0n1 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.727 13:44:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.294 nvme0n1 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.294 13:44:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.294 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.552 nvme0n1 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.552 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.816 13:44:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.383 nvme0n1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.383 13:44:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.383 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.948 nvme0n1 00:26:40.948 13:44:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.948 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.948 13:44:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.948 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.206 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.772 nvme0n1 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.772 
13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:41.772 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
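The xtrace output above is one pass of the test's inner loop: for each digest/DH-group/key-index combination, host/auth.sh installs the key on the target (nvmet_auth_set_key), restricts the host to that single digest and DH group with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists), confirms the controller shows up in bdev_nvme_get_controllers, and detaches it again. The sketch below replays one such iteration by hand; it assumes SPDK's scripts/rpc.py client and keyring entries named key1/ckey1 registered beforehand (the log's rpc_cmd is the test harness's wrapper around the same RPCs).

#!/usr/bin/env bash
# Hedged sketch of a single connect_authenticate iteration, reconstructed from
# the RPC calls traced above. Addresses, NQNs and flag names mirror the log;
# the scripts/rpc.py invocation and the pre-registered key names are assumptions.
set -euo pipefail

rpc=scripts/rpc.py
digest=sha256 dhgroup=ffdhe8192 keyid=1

# Allow only this digest/DH group on the host side for the iteration.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host key and the controller key for bidirectional auth.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Verify the authenticated controller exists, then detach before the next combo.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0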
00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.773 13:44:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.757 nvme0n1 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:42.757 
13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.757 13:44:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.339 nvme0n1 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.339 nvme0n1 00:26:43.339 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.340 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.340 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.340 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.340 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.340 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
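The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced above is how the script makes bidirectional authentication optional: when no controller secret is defined for a key index (keyid 4 in this run, hence the [[ -z '' ]] trace), the array expands to nothing and the attach carries only --dhchap-key. A small, self-contained illustration of that ${var:+...} idiom follows; the key string in it is hypothetical.

#!/usr/bin/env bash
# Illustration of the ${var:+...} optional-flag idiom used at host/auth.sh@58.
# The controller-key value below is hypothetical; only the expansion pattern matters.
ckeys=( "DHHC-1:03:hypothetical-controller-secret=:" "" )   # index 1 has no ckey

for keyid in 0 1; do
    # Expands to the extra flag pair only when ckeys[keyid] is non-empty.
    ckey=( ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} )
    echo "keyid=${keyid}: --dhchap-key key${keyid} ${ckey[*]:-<no controller key>}"
done
# keyid=0: --dhchap-key key0 --dhchap-ctrlr-key ckey0
# keyid=1: --dhchap-key key1 <no controller key>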
00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:43.598 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.599 nvme0n1 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.599 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 nvme0n1 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.856 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.857 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.114 nvme0n1 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.114 13:44:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:44.114 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.115 nvme0n1 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.115 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.372 nvme0n1 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:44.372 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
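The recurring local ip / ip_candidates blocks in the trace come from nvmf/common.sh's get_main_ns_ip helper, which maps the transport to the address variable to dial (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value, which is why every attach in this run targets 10.0.0.1. Below is a rough reconstruction from the traced commands; the transport variable name and the error handling are assumptions, not the verbatim helper.

# Rough reconstruction of get_main_ns_ip from the xtrace above (assumptions noted).
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # TEST_TRANSPORT is an assumed name for the transport setting ("tcp" in this run).
    [[ -z "$TEST_TRANSPORT" || -z "${ip_candidates[$TEST_TRANSPORT]:-}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z "${!ip}" ]] && return 1   # indirect lookup, e.g. the value of NVMF_INITIATOR_IP
    echo "${!ip}"                   # prints 10.0.0.1 for tcp in this run
}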
00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.373 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.631 nvme0n1 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.631 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.889 nvme0n1 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:44.889 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.890 13:44:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 nvme0n1 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 nvme0n1 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.148 13:44:58 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.148 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.406 nvme0n1 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.406 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.664 nvme0n1 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.664 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.922 13:44:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.922 13:44:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.922 nvme0n1 00:26:45.922 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.922 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.922 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.922 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.922 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:46.181 13:44:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.181 nvme0n1 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.181 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:46.439 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.695 nvme0n1 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.695 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.696 13:44:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.952 nvme0n1 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.952 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.210 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.211 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 nvme0n1 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.468 13:45:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.468 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.041 nvme0n1 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.041 13:45:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.300 nvme0n1 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.300 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.558 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.558 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.558 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:48.558 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.558 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.559 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.816 nvme0n1 00:26:48.816 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.816 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.816 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.816 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.817 13:45:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.381 nvme0n1 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.381 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.639 13:45:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.207 nvme0n1 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.207 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.778 nvme0n1 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.778 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.037 13:45:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 nvme0n1 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.602 13:45:04 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.602 13:45:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.168 nvme0n1 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.168 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 nvme0n1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.426 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 nvme0n1 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 nvme0n1 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.748 13:45:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:52.748 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.005 13:45:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.005 nvme0n1 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.005 13:45:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.006 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.006 13:45:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.263 nvme0n1 00:26:53.263 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.263 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.263 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.263 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.264 nvme0n1 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.264 
13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.264 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 nvme0n1 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.522 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
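Annotation: the trace above is host/auth.sh sweeping DH-HMAC-CHAP key IDs for the sha512 digest with the ffdhe3072 DH group. A condensed sketch of the per-key step, reconstructed only from the function names and arguments visible in this trace (the helpers' internals live in the test script and are not shown in this log), is:

  # Per-key step as suggested by the trace (host/auth.sh@102-104); a sketch, not the script itself.
  for keyid in "${!keys[@]}"; do
      # Target side: install the DH-HMAC-CHAP key (and controller key, if present) for this key ID.
      nvmet_auth_set_key sha512 ffdhe3072 "$keyid"
      # Host side: restrict the initiator to this digest/DH group, then attach and verify.
      connect_authenticate sha512 ffdhe3072 "$keyid"
  done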
00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.781 nvme0n1 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.781 13:45:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.781 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.039 nvme0n1 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.039 13:45:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.039 
13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.039 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.040 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.040 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.040 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.040 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.297 nvme0n1 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:54.297 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.298 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.555 nvme0n1 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.555 13:45:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.555 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.556 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.556 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.556 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.813 nvme0n1 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
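Annotation: within each iteration, connect_authenticate drives the SPDK RPCs that are traced here (the sweep has now moved on to the ffdhe4096 group). A minimal sketch of that host-side sequence, using only calls and arguments that appear verbatim in this log (rpc_cmd is the test wrapper around the SPDK RPC client; key0/ckey0 stand for whichever key names were registered for the current key ID):

  # Host-side authentication path as traced above (simplified; xtrace toggling and error handling omitted).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to report nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next key ID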
00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.813 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.814 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.071 nvme0n1 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.071 13:45:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.071 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.329 nvme0n1 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.329 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.588 nvme0n1 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.588 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.846 nvme0n1 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:55.846 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
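Annotation: the same key sweep repeats for each DH group, and by this point the trace has reached ffdhe6144. The overall nesting implied by the for-loops at host/auth.sh@101-103 looks roughly like the sketch below; the group list is assumed from the groups that appear in this section of the log and may include additional entries elsewhere in the run:

  # Outer structure implied by the trace; dhgroups list assumed from this log section only.
  for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done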
00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.103 13:45:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.362 nvme0n1 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.362 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.924 nvme0n1 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:56.924 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.925 13:45:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.182 nvme0n1 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.182 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.748 nvme0n1 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.748 13:45:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDg4NDU5Mzk2NjkyOTIyNzcwOTA3ZjQwZTgxYmJkY2X0Ui9P: 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWJkN2FhYTA3NDA4YWIyNWRkZTVlMGNiOWI3ODQwZGU1MDg4NDMyMTAzZWMwN2U4YWJkYWNiM2U0ZTc3NTBiMvO9WIg=: 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.748 13:45:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.314 nvme0n1 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.314 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.315 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.881 nvme0n1 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.881 13:45:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.139 13:45:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.139 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmNkMzczMjU3YWI2ZGRmYjAxZGZhODdhZmEzNjUwZmaA5egU: 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: ]] 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTVlMGI0MjRjNmU5YjE5NjVlODU4YzI1NzJiNWQxMzJUJPof: 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.140 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 nvme0n1 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzU4Nzc2ODg2ZjY2NTRlM2Q0MDZkODY5MTI4NDc4MmE4MWU5Nzg4YWU4ZjI2NzFiSqKbxA==: 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk4NjU1ZTVhNzQzMjBlMjcxZTZjNjVjNjRhYTk5MWHbCxSU: 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:59.704 13:45:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.704 13:45:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.269 nvme0n1 00:27:00.269 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5NTg2MWQ3OTYxYmMwMDU0ODdiZDc2MmU1MjM0MTY4ZWMyMmM3NjJiZjllNjdmNDQ5ZTdhMDYzMWNiMjliY03eI4k=: 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:00.527 13:45:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.092 nvme0n1 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODJlZmJmNzhiMmZiMWJjMDg4ZWRjYWJhOWU0NzdiNDBhMGU4ODI1NzI2YTFhZGY3v7Srtg==: 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzE0NTZiMDQzYmQyZWI5MmY2MDdlMDg3MDdkOTQ1MTZkNGQ0MzE1YmY5ZDU1MGM1owgzkg==: 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.092 
13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:01.092 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.093 request: 00:27:01.093 { 00:27:01.093 "name": "nvme0", 00:27:01.093 "trtype": "tcp", 00:27:01.093 "traddr": "10.0.0.1", 00:27:01.093 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:01.093 "adrfam": "ipv4", 00:27:01.093 "trsvcid": "4420", 00:27:01.093 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:01.093 "method": "bdev_nvme_attach_controller", 00:27:01.093 "req_id": 1 00:27:01.093 } 00:27:01.093 Got JSON-RPC error response 00:27:01.093 response: 00:27:01.093 { 00:27:01.093 "code": -32602, 00:27:01.093 "message": "Invalid parameters" 00:27:01.093 } 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.093 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:01.351 
13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.351 request: 00:27:01.351 { 00:27:01.351 "name": "nvme0", 00:27:01.351 "trtype": "tcp", 00:27:01.351 "traddr": "10.0.0.1", 00:27:01.351 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:01.351 "adrfam": "ipv4", 00:27:01.351 "trsvcid": "4420", 00:27:01.351 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:01.351 "dhchap_key": "key2", 00:27:01.351 "method": "bdev_nvme_attach_controller", 00:27:01.351 "req_id": 1 00:27:01.351 } 00:27:01.351 Got JSON-RPC error response 00:27:01.351 response: 00:27:01.351 { 00:27:01.351 "code": -32602, 00:27:01.351 "message": "Invalid parameters" 00:27:01.351 } 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:01.351 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
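The per-combination passes traced above reduce to a short host-side RPC sequence. The sketch below restates it in plain shell as a reference; it is an illustration only, assuming the SPDK target is already listening on 10.0.0.1:4420, that the DH-HMAC-CHAP secrets (key1/ckey1 and so on) were registered earlier in the run (not shown in this excerpt), and that rpc_cmd and NOT are the autotest helpers visible in the trace (rpc_cmd wrapping the JSON-RPC client, NOT asserting that the wrapped command fails).

# 1. Restrict the host to one digest/dhgroup pair from the test matrix.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# 2. Attach with a host key and, when a controller key exists, request
#    bidirectional authentication.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Confirm the controller came up, then detach before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

# 4. Negative path: attaching with a missing or mismatched key is expected to
#    be rejected with the JSON-RPC -32602 "Invalid parameters" error shown in
#    the request/response blocks above.
NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2

The trace that follows continues with exactly this negative pattern (key1 paired with ckey2) before tearing the target down and moving on to the nvmf_digest suite.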
00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.352 request: 00:27:01.352 { 00:27:01.352 "name": "nvme0", 00:27:01.352 "trtype": "tcp", 00:27:01.352 "traddr": "10.0.0.1", 00:27:01.352 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:01.352 "adrfam": "ipv4", 00:27:01.352 "trsvcid": "4420", 00:27:01.352 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:01.352 "dhchap_key": "key1", 00:27:01.352 "dhchap_ctrlr_key": "ckey2", 00:27:01.352 "method": "bdev_nvme_attach_controller", 00:27:01.352 
"req_id": 1 00:27:01.352 } 00:27:01.352 Got JSON-RPC error response 00:27:01.352 response: 00:27:01.352 { 00:27:01.352 "code": -32602, 00:27:01.352 "message": "Invalid parameters" 00:27:01.352 } 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.352 rmmod nvme_tcp 00:27:01.352 rmmod nvme_fabrics 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 93919 ']' 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 93919 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 93919 ']' 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 93919 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93919 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:01.352 killing process with pid 93919 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93919' 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 93919 00:27:01.352 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 93919 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:01.610 13:45:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:01.610 13:45:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:02.229 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:02.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:02.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:02.487 13:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.iHk /tmp/spdk.key-null.mjZ /tmp/spdk.key-sha256.2ff /tmp/spdk.key-sha384.Qrm /tmp/spdk.key-sha512.pMt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:27:02.487 13:45:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:02.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.004 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:03.004 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:03.004 00:27:03.004 real 0m35.896s 00:27:03.004 user 0m31.828s 00:27:03.004 sys 0m4.182s 00:27:03.004 13:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:03.004 13:45:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 ************************************ 00:27:03.004 END TEST nvmf_auth_host 00:27:03.004 ************************************ 00:27:03.004 13:45:15 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:03.004 13:45:15 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:03.004 13:45:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 
']' 00:27:03.004 13:45:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:03.004 13:45:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:03.004 ************************************ 00:27:03.004 START TEST nvmf_digest 00:27:03.004 ************************************ 00:27:03.004 13:45:15 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:03.004 * Looking for test storage... 00:27:03.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:03.004 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:03.005 13:45:16 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:03.005 Cannot find device "nvmf_tgt_br" 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:03.005 Cannot find device "nvmf_tgt_br2" 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:03.005 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:03.262 Cannot find device "nvmf_tgt_br" 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:03.262 Cannot find device "nvmf_tgt_br2" 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:03.262 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:03.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:03.262 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:03.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:03.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:27:03.520 00:27:03.520 --- 10.0.0.2 ping statistics --- 00:27:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.520 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:03.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:03.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:27:03.520 00:27:03.520 --- 10.0.0.3 ping statistics --- 00:27:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.520 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:03.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:03.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:27:03.520 00:27:03.520 --- 10.0.0.1 ping statistics --- 00:27:03.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.520 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 ************************************ 00:27:03.520 START TEST nvmf_digest_clean 00:27:03.520 ************************************ 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.520 13:45:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=95489 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 95489 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95489 ']' 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:03.520 13:45:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:03.520 [2024-05-15 13:45:16.479161] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:03.520 [2024-05-15 13:45:16.479255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.520 [2024-05-15 13:45:16.605587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:03.520 [2024-05-15 13:45:16.619876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.779 [2024-05-15 13:45:16.674879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.779 [2024-05-15 13:45:16.674926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.779 [2024-05-15 13:45:16.674938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.779 [2024-05-15 13:45:16.674948] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.779 [2024-05-15 13:45:16.674956] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
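Annotation for readers skimming the xtrace above: before the digest tests start, nvmf_veth_init builds a small bridged veth topology, and the nvmf target is then launched inside the nvmf_tgt_ns_spdk namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc). The sketch below condenses those steps; interface names and addresses are copied from the log, while ordering, the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the cleanup/error handling are simplified.

  # condensed sketch of the topology nvmf_veth_init creates above (not the full helper)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                               # host bridge joining the two pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check, as above

This is why the listener on 10.0.0.2:4420 that appears later in the log is only reachable from nvmf_init_if through the bridge.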
00:27:03.779 [2024-05-15 13:45:16.674989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:04.713 null0 00:27:04.713 [2024-05-15 13:45:17.597700] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.713 [2024-05-15 13:45:17.621622] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:04.713 [2024-05-15 13:45:17.621894] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95521 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95521 /var/tmp/bperf.sock 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95521 ']' 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:04.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
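Each run_bperf call above starts a dedicated bdevperf instance with the workload encoded on the command line. The annotated form below is my reading of the flags; the command itself is copied from the log, but the flag descriptions are not authoritative.

  # bdevperf invocation for the first clean run (randread, 4 KiB, QD 128)
  #   -m 2                    core mask: run the reactor on core 1
  #   -r /var/tmp/bperf.sock  per-instance RPC socket, separate from the target's
  #   -w randread -o 4096     workload type and I/O size in bytes
  #   -q 128 -t 2             queue depth and run time in seconds
  #   -z --wait-for-rpc       start idle; init and the test run are driven over RPC
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc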
00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:04.713 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:04.713 [2024-05-15 13:45:17.676738] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:04.713 [2024-05-15 13:45:17.676834] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95521 ] 00:27:04.713 [2024-05-15 13:45:17.804982] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:04.971 [2024-05-15 13:45:17.823359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.971 [2024-05-15 13:45:17.881096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.971 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.971 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:04.971 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:04.971 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:04.971 13:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:05.230 13:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.230 13:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:05.808 nvme0n1 00:27:05.808 13:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:05.808 13:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:05.808 Running I/O for 2 seconds... 
00:27:07.707 00:27:07.707 Latency(us) 00:27:07.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:07.707 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:07.707 nvme0n1 : 2.01 14927.83 58.31 0.00 0.00 8567.07 7489.83 19723.22 00:27:07.707 =================================================================================================================== 00:27:07.707 Total : 14927.83 58.31 0.00 0.00 8567.07 7489.83 19723.22 00:27:07.707 0 00:27:07.707 13:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:07.707 13:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:07.707 13:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:07.707 13:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:07.707 13:45:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:07.707 | select(.opcode=="crc32c") 00:27:07.707 | "\(.module_name) \(.executed)"' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95521 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95521 ']' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95521 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95521 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:08.273 killing process with pid 95521 00:27:08.273 Received shutdown signal, test time was about 2.000000 seconds 00:27:08.273 00:27:08.273 Latency(us) 00:27:08.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:08.273 =================================================================================================================== 00:27:08.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95521' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95521 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95521 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95574 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95574 /var/tmp/bperf.sock 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95574 ']' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:08.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:08.273 13:45:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:08.273 [2024-05-15 13:45:21.352132] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:08.273 [2024-05-15 13:45:21.353034] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95574 ] 00:27:08.273 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:08.273 Zero copy mechanism will not be used. 00:27:08.531 [2024-05-15 13:45:21.479990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
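The first run above also shows the per-instance RPC sequence that every later bperf instance repeats; condensed below, with the socket path and arguments copied from the xtrace.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf=/var/tmp/bperf.sock
  # finish the subsystem init that --wait-for-rpc deferred
  $rpc -s $bperf framework_start_init
  # connect to the target with TCP data digest (CRC32C) enabled; this is what exercises crc32c
  $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # drive the 2-second workload that was configured on the bdevperf command line
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests
  # afterwards, pull crc32c accounting out of the accel layer for the pass/fail check
  $rpc -s $bperf accel_get_stats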
00:27:08.531 [2024-05-15 13:45:21.499831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.531 [2024-05-15 13:45:21.560961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.466 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:09.466 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:09.466 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:09.466 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:09.466 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:09.724 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.724 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.981 nvme0n1 00:27:09.981 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:09.981 13:45:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:10.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:10.239 Zero copy mechanism will not be used. 00:27:10.239 Running I/O for 2 seconds... 00:27:12.151 00:27:12.151 Latency(us) 00:27:12.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.151 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:12.151 nvme0n1 : 2.00 7876.31 984.54 0.00 0.00 2027.92 1856.85 2964.72 00:27:12.151 =================================================================================================================== 00:27:12.151 Total : 7876.31 984.54 0.00 0.00 2027.92 1856.85 2964.72 00:27:12.151 0 00:27:12.151 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:12.151 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:12.151 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:12.151 | select(.opcode=="crc32c") 00:27:12.151 | "\(.module_name) \(.executed)"' 00:27:12.151 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:12.151 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95574 00:27:12.410 13:45:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95574 ']' 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95574 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95574 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:12.410 killing process with pid 95574 00:27:12.410 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.410 00:27:12.410 Latency(us) 00:27:12.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.410 =================================================================================================================== 00:27:12.410 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95574' 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95574 00:27:12.410 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95574 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95634 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95634 /var/tmp/bperf.sock 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95634 ']' 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:12.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
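The pass/fail criterion for each clean run is the accel-stats check visible at host/digest.sh@93-96 above: with DSA disabled, crc32c must have executed at least once and must have been handled by the software module. A condensed rendering follows; the jq filter is copied verbatim from the log, the surrounding shell is paraphrased.

  get_accel_stats() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  }
  read -r acc_module acc_executed < <(get_accel_stats)
  (( acc_executed > 0 ))              # digests were actually computed during the run
  [[ $acc_module == software ]]       # and by the expected (software) accel module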
00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:12.669 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.669 [2024-05-15 13:45:25.690012] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:12.669 [2024-05-15 13:45:25.690163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95634 ] 00:27:12.928 [2024-05-15 13:45:25.821824] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.928 [2024-05-15 13:45:25.839098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.928 [2024-05-15 13:45:25.900201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.928 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:12.928 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:12.928 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:12.928 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:12.928 13:45:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:13.496 13:45:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.496 13:45:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.754 nvme0n1 00:27:13.754 13:45:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:13.754 13:45:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.754 Running I/O for 2 seconds... 
00:27:16.284 00:27:16.284 Latency(us) 00:27:16.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.284 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:16.284 nvme0n1 : 2.01 17226.59 67.29 0.00 0.00 7424.25 6241.52 17850.76 00:27:16.284 =================================================================================================================== 00:27:16.284 Total : 17226.59 67.29 0.00 0.00 7424.25 6241.52 17850.76 00:27:16.284 0 00:27:16.284 13:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:16.284 13:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:16.284 13:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:16.284 13:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:16.284 13:45:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:16.284 | select(.opcode=="crc32c") 00:27:16.284 | "\(.module_name) \(.executed)"' 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95634 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95634 ']' 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95634 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95634 00:27:16.284 killing process with pid 95634 00:27:16.284 Received shutdown signal, test time was about 2.000000 seconds 00:27:16.284 00:27:16.284 Latency(us) 00:27:16.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.284 =================================================================================================================== 00:27:16.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95634' 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95634 00:27:16.284 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95634 00:27:16.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
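killprocess, seen at common/autotest_common.sh@946-970 after every bperf run, is the teardown helper; in outline it does roughly the following (condensed from the xtrace, not the full helper, and the sudo special case is elided).

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                         # bail out if it is already gone
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 for a bperf instance
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                        # reap it so /var/tmp/bperf.sock can be reused
  }

The interleaved 'Received shutdown signal' blocks with all-zero totals appear to be bdevperf's own shutdown summary printed while it is being killed, not a failed run; the real results were already reported just above each kill.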
00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95687 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95687 /var/tmp/bperf.sock 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95687 ']' 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:16.543 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:16.543 [2024-05-15 13:45:29.446893] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:16.543 [2024-05-15 13:45:29.447295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95687 ] 00:27:16.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:16.543 Zero copy mechanism will not be used. 00:27:16.543 [2024-05-15 13:45:29.578456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
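For orientation, the four clean-path permutations driven at host/digest.sh@128-131 in this log are:

  # run_bperf <rw> <io_size_bytes> <queue_depth> <scan_dsa>
  run_bperf randread  4096   128 false
  run_bperf randread  131072  16 false
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072  16 false

The 131072-byte runs print 'I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.', so those exercise the copy path; the crc32c accounting check at the end is the same in all four cases.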
00:27:16.543 [2024-05-15 13:45:29.597899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.801 [2024-05-15 13:45:29.656596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.801 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:16.801 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:16.801 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:16.801 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:16.801 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:17.059 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.059 13:45:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:17.317 nvme0n1 00:27:17.317 13:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:17.317 13:45:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:17.317 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:17.317 Zero copy mechanism will not be used. 00:27:17.317 Running I/O for 2 seconds... 00:27:19.843 00:27:19.843 Latency(us) 00:27:19.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.843 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:19.843 nvme0n1 : 2.00 7718.12 964.76 0.00 0.00 2068.52 1427.75 3869.74 00:27:19.843 =================================================================================================================== 00:27:19.843 Total : 7718.12 964.76 0.00 0.00 2068.52 1427.75 3869.74 00:27:19.843 0 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:19.843 | select(.opcode=="crc32c") 00:27:19.843 | "\(.module_name) \(.executed)"' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95687 00:27:19.843 13:45:32 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95687 ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95687 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95687 00:27:19.843 killing process with pid 95687 00:27:19.843 Received shutdown signal, test time was about 2.000000 seconds 00:27:19.843 00:27:19.843 Latency(us) 00:27:19.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.843 =================================================================================================================== 00:27:19.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95687' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95687 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95687 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95489 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95489 ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95489 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95489 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95489' 00:27:19.843 killing process with pid 95489 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95489 00:27:19.843 [2024-05-15 13:45:32.931079] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:19.843 13:45:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95489 00:27:20.101 00:27:20.101 real 0m16.695s 00:27:20.101 user 0m31.416s 00:27:20.101 sys 0m5.276s 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:20.101 ************************************ 00:27:20.101 END TEST 
nvmf_digest_clean 00:27:20.101 ************************************ 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:20.101 ************************************ 00:27:20.101 START TEST nvmf_digest_error 00:27:20.101 ************************************ 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=95763 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 95763 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95763 ']' 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:20.101 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.359 [2024-05-15 13:45:33.241826] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:20.359 [2024-05-15 13:45:33.241931] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.359 [2024-05-15 13:45:33.370886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:20.359 [2024-05-15 13:45:33.391784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.359 [2024-05-15 13:45:33.448350] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.359 [2024-05-15 13:45:33.448411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
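nvmf_digest_error, which starts here, reuses the same target and bperf plumbing but routes crc32c through the error-injecting accel module. The relevant RPCs, as they appear in the xtrace that follows, are sketched below; arguments are copied from the log, while the per-flag descriptions are my reading and should not be treated as authoritative.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side (default /var/tmp/spdk.sock): assign the crc32c opcode to the "error" accel module
  $rpc accel_assign_opc -o crc32c -m error
  # bperf side: keep NVMe error statistics and retry I/O indefinitely instead of failing the bdev
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side, per test case: arm or (here) clear the injected crc32c failure
  $rpc accel_error_inject_error -o crc32c -t disable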
00:27:20.359 [2024-05-15 13:45:33.448426] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.359 [2024-05-15 13:45:33.448439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.359 [2024-05-15 13:45:33.448450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.359 [2024-05-15 13:45:33.448490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.616 [2024-05-15 13:45:33.541011] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.616 null0 00:27:20.616 [2024-05-15 13:45:33.640813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.616 [2024-05-15 13:45:33.664706] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:20.616 [2024-05-15 13:45:33.664933] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95786 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95786 /var/tmp/bperf.sock 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95786 ']' 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:20.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:20.616 13:45:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:20.873 [2024-05-15 13:45:33.718960] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:20.873 [2024-05-15 13:45:33.719051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95786 ] 00:27:20.873 [2024-05-15 13:45:33.848690] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:20.873 [2024-05-15 13:45:33.870455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.873 [2024-05-15 13:45:33.931270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.807 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:21.807 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:21.807 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.807 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.065 13:45:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:22.322 nvme0n1 00:27:22.322 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:22.323 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.323 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.323 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.323 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:22.323 13:45:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:22.323 Running I/O for 2 seconds... 00:27:22.323 [2024-05-15 13:45:35.395054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.323 [2024-05-15 13:45:35.395563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.323 [2024-05-15 13:45:35.395597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.323 [2024-05-15 13:45:35.411801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.323 [2024-05-15 13:45:35.411981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.323 [2024-05-15 13:45:35.412049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.428089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.428288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.428374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.444298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.444508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.444576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.460584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.460796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.460881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.476563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.476783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.476850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.491596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.491831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.491921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.507375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.507614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.507684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.524119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.524359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.524440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.540928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.541152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.541221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.557180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.557428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.557499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.572442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.572654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.572721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.588816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 
00:27:22.581 [2024-05-15 13:45:35.589041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.589121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.605437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.605655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.605754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.621757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.621968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.622044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.637956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.638135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.638225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.653922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.654167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.654271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.581 [2024-05-15 13:45:35.669580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.581 [2024-05-15 13:45:35.669827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.581 [2024-05-15 13:45:35.669901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.685285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.685443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.685519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.700950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.701129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.701197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.716917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.717087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.717158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.732406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.732545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.732615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.748433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.748630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.748726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.765070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.765332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.765403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.781682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.781879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.781950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.797790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.798018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.798098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.812799] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.812976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.813053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.827833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.828069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.843866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.844105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.844174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.859789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.860009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.860081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.876050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.839 [2024-05-15 13:45:35.876308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.839 [2024-05-15 13:45:35.876381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.839 [2024-05-15 13:45:35.893067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.840 [2024-05-15 13:45:35.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.840 [2024-05-15 13:45:35.893368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.840 [2024-05-15 13:45:35.909996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.840 [2024-05-15 13:45:35.910219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.840 [2024-05-15 13:45:35.910319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:22.840 [2024-05-15 13:45:35.926796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:22.840 [2024-05-15 13:45:35.926997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.840 [2024-05-15 13:45:35.927073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:35.943064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:35.943287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:35.943361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:35.959036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:35.959208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:35.959303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:35.974618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:35.974783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:35.974880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:35.991114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:35.991363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:35.991438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.007914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.008135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.008219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.024550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.024750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.024821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.040823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.040991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.041084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.057164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.057371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.057454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.073044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.073255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.073342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.089891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.090076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.090168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.107447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.107619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.107702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.124229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.124447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.124513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.140470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.140698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.140760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.155662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.155860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.155929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.171100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.099 [2024-05-15 13:45:36.171345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.099 [2024-05-15 13:45:36.171444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.099 [2024-05-15 13:45:36.187559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.100 [2024-05-15 13:45:36.187786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.100 [2024-05-15 13:45:36.187864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.204353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.204571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.204644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.221118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.221372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.221459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.237182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.237394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.237459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.253328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.253532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:23.368 [2024-05-15 13:45:36.253613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.269537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.269750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.269813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.286023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.286228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.286336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.302880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.303094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.368 [2024-05-15 13:45:36.303161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.368 [2024-05-15 13:45:36.319553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.368 [2024-05-15 13:45:36.319744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.319840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.336825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.337074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.337147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.353843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.354052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.354148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.370412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.370610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:17030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.370701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.387023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.387272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.387357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.404049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.404578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.404684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.428594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.428806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.428871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.444813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.444983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.445082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.369 [2024-05-15 13:45:36.460947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.369 [2024-05-15 13:45:36.461120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.369 [2024-05-15 13:45:36.461202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.476412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.476598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.476674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.492531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.492718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.492807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.508919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.509106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.509194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.524948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.525121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.525195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.541064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.541323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.541400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.557502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.557686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.557785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.573739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.573981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.574052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.590418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.590606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.590658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.606061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.606283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.606356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.621476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.621666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.621777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.636498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.636677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.636767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.651875] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.652100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.652159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.667795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.668004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.668063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.683491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.683722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.683791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.699427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.699601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.699694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.715801] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.716021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.716093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.642 [2024-05-15 13:45:36.732208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.642 [2024-05-15 13:45:36.732428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.642 [2024-05-15 13:45:36.732492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.901 [2024-05-15 13:45:36.748199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.901 [2024-05-15 13:45:36.748415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.901 [2024-05-15 13:45:36.748485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.901 [2024-05-15 13:45:36.763981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.901 [2024-05-15 13:45:36.764151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.901 [2024-05-15 13:45:36.764235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.901 [2024-05-15 13:45:36.779329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.901 [2024-05-15 13:45:36.779533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.901 [2024-05-15 13:45:36.779629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.901 [2024-05-15 13:45:36.794946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.901 [2024-05-15 13:45:36.795119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.901 [2024-05-15 13:45:36.795205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.901 [2024-05-15 13:45:36.810118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.901 [2024-05-15 13:45:36.810333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.901 [2024-05-15 13:45:36.810411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:23.901 [2024-05-15 13:45:36.825781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.825965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.826034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.841586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.841779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.841837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.857444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.857663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.857745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.874272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.874506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.874574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.890888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.891079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.891169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.907704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.907935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.908000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.924334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.924556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.924621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.940902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.941139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.941226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.958012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.958280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.958369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.975209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.975454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.975521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.902 [2024-05-15 13:45:36.992440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:23.902 [2024-05-15 13:45:36.992666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.902 [2024-05-15 13:45:36.992753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.009264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.009507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.009582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.026009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.026260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.026341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.043012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.043207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.043321] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.059527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.059746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.059805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.076166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.076366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.076420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.092105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.092323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.108245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.108443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.108506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.125470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.125640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.125751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.142370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.142583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.160 [2024-05-15 13:45:37.142652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.159507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.160 [2024-05-15 13:45:37.159738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.160 [2024-05-15 13:45:37.159813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.160 [2024-05-15 13:45:37.176416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.176637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.176698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.161 [2024-05-15 13:45:37.192852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.193090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.193154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.161 [2024-05-15 13:45:37.209383] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.209571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.209664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.161 [2024-05-15 13:45:37.226352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.226568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.226647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.161 [2024-05-15 13:45:37.243165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.243392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.243475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.161 [2024-05-15 13:45:37.259908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.161 [2024-05-15 13:45:37.260130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.161 [2024-05-15 13:45:37.260197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.418 [2024-05-15 13:45:37.276431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.418 [2024-05-15 13:45:37.276646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.418 [2024-05-15 13:45:37.276724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.418 [2024-05-15 13:45:37.293093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.418 [2024-05-15 13:45:37.293335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.418 [2024-05-15 13:45:37.293408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.418 [2024-05-15 13:45:37.309515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.418 [2024-05-15 13:45:37.309723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.418 [2024-05-15 13:45:37.309827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.418 [2024-05-15 13:45:37.326015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.419 [2024-05-15 13:45:37.326180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.419 [2024-05-15 13:45:37.326301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.419 [2024-05-15 13:45:37.342188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.419 [2024-05-15 13:45:37.342408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.419 [2024-05-15 13:45:37.342477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.419 [2024-05-15 13:45:37.357353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.419 [2024-05-15 13:45:37.357518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.419 [2024-05-15 13:45:37.357598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.419 [2024-05-15 13:45:37.372427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x207cc70) 00:27:24.419 [2024-05-15 13:45:37.372619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.419 [2024-05-15 13:45:37.372688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.419 00:27:24.419 Latency(us) 00:27:24.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:24.419 nvme0n1 
: 2.01 15517.37 60.61 0.00 0.00 8241.64 6959.30 31706.94 00:27:24.419 =================================================================================================================== 00:27:24.419 Total : 15517.37 60.61 0.00 0.00 8241.64 6959.30 31706.94 00:27:24.419 0 00:27:24.419 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:24.419 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:24.419 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:24.419 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:24.419 | .driver_specific 00:27:24.419 | .nvme_error 00:27:24.419 | .status_code 00:27:24.419 | .command_transient_transport_error' 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 122 > 0 )) 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95786 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95786 ']' 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95786 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95786 00:27:24.676 killing process with pid 95786 00:27:24.676 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.676 00:27:24.676 Latency(us) 00:27:24.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.676 =================================================================================================================== 00:27:24.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95786' 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95786 00:27:24.676 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95786 00:27:24.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95843 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95843 /var/tmp/bperf.sock 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95843 ']' 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:24.934 13:45:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:24.934 [2024-05-15 13:45:38.014071] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:24.934 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:24.934 Zero copy mechanism will not be used. 00:27:24.934 [2024-05-15 13:45:38.014194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95843 ] 00:27:25.192 [2024-05-15 13:45:38.142168] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:25.192 [2024-05-15 13:45:38.162744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.192 [2024-05-15 13:45:38.220288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.450 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.070 nvme0n1 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:26.070 13:45:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:26.070 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.070 Zero copy mechanism will not be used. 00:27:26.070 Running I/O for 2 seconds... 
00:27:26.070 [2024-05-15 13:45:38.999628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.070 [2024-05-15 13:45:38.999888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.070 [2024-05-15 13:45:38.999983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.070 [2024-05-15 13:45:39.004199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.070 [2024-05-15 13:45:39.004415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.070 [2024-05-15 13:45:39.004499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.070 [2024-05-15 13:45:39.008630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.070 [2024-05-15 13:45:39.008798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.070 [2024-05-15 13:45:39.008874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.070 [2024-05-15 13:45:39.013068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.070 [2024-05-15 13:45:39.013255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.070 [2024-05-15 13:45:39.013347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.017555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.017987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.018084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.022231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.022406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.022494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.026693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.026866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.026963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.031114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.031297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.031389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.035536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.035699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.035787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.039904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.040052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.040120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.044283] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.044425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.044488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.048536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.048687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.048767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.052863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.053001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.053070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.057191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.057365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.057451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.061603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.061759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.061858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.066068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.066232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.066334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.070472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.070617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.070686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.074843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.074973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.075039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.079214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.079371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.079436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.083484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.083615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.083705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.087845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.087979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.071 [2024-05-15 13:45:39.088057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.092277] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.092410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.092482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.096575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.096724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.096792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.101376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.101551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.101658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.106160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.106321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.106434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.110607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.110757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.110839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.114977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.115124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.115206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.119461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.119610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.119697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.071 [2024-05-15 13:45:39.123821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.071 [2024-05-15 13:45:39.123974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.071 [2024-05-15 13:45:39.124045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.128232] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.128403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.128493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.132713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.132874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.132983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.137152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.137318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.137417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.141529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.141688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.141790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.146002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.146173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.146292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.150456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.150619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.150699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.155503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.155682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.155762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.159982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.160147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.160246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.164978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.165159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.165257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.169509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.169683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.169787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.175091] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.175305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.175415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.179627] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.179796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.179896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.184001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:26.387 [2024-05-15 13:45:39.184153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.184274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.188456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.188609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.188682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.192765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.192905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.192975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.387 [2024-05-15 13:45:39.197117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.387 [2024-05-15 13:45:39.197248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.387 [2024-05-15 13:45:39.197350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.201499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.201663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.201772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.205878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.206093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.210244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.210378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.210461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.215055] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.215269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.215362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.219504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.219677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.219762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.223924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.224105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.224185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.228425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.228609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.228676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.232717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.232873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.232944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.237099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.237263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.237342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.241455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.241606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.241675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.245833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.245958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.246038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.250186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.250403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.250497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.254543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.254675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.254755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.258953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.259125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.259197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.263429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.263604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.263683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.267803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.267972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.268024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.272185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.272369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.272448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.276518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.276691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.276757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.280966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.281107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.281177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.285323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.285434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.285524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.289634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.289786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.289855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.293945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.294074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.294135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.298196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.298343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.298417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.302498] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.388 [2024-05-15 13:45:39.302627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.388 [2024-05-15 13:45:39.302700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.388 [2024-05-15 13:45:39.306738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.306877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.306940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.311076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.311202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.311304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.315404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.315557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.315626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.319771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.319911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.319971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.324052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.324173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.324252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.328336] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.328443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.328520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.332877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.333015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.389 [2024-05-15 13:45:39.333087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.337760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.337905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.338005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.342175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.342363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.342447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.346560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.346677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.346752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.350903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.351020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.351070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.355170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.355330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.355397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.359579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.359691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.359794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.363881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.364018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.364093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.368189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.368334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.368411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.372471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.372600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.376747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.376875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.376945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.381046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.381184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.381295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.385335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.385457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.385515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.389540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.389639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.389708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.393759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.393876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.393955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.397979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.398083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.398149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.402271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.402369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.402443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.406564] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.389 [2024-05-15 13:45:39.406703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.389 [2024-05-15 13:45:39.406779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.389 [2024-05-15 13:45:39.410986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.411165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.411239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.415430] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.415623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.415717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.419802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.419960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.420039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.424080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:26.390 [2024-05-15 13:45:39.424209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.424306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.428348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.428650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.428749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.432831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.433045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.433154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.437213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.437453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.437552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.441751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.441971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.442072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.446206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.446356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.446451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.450612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.450872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.450985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.455177] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.455334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.455426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.459610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.459940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.460040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.464225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.464487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.464603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.468759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.468997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.469121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.473271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.473502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.473612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.477662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.477894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.477991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.390 [2024-05-15 13:45:39.482046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.390 [2024-05-15 13:45:39.482271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.390 [2024-05-15 13:45:39.482378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:26.649 [2024-05-15 13:45:39.486493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.649 [2024-05-15 13:45:39.486735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.649 [2024-05-15 13:45:39.486857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.649 [2024-05-15 13:45:39.491141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.649 [2024-05-15 13:45:39.491300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.649 [2024-05-15 13:45:39.491372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.649 [2024-05-15 13:45:39.495466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.649 [2024-05-15 13:45:39.495701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.649 [2024-05-15 13:45:39.495780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.649 [2024-05-15 13:45:39.500058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.649 [2024-05-15 13:45:39.500393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.649 [2024-05-15 13:45:39.500544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.649 [2024-05-15 13:45:39.505501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.649 [2024-05-15 13:45:39.505920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.506084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.510810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.511147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.511292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.515509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.515818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.515921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.520127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.520465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.520661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.524942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.525301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.525611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.530038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.530401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.530733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.535018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.535373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.535551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.540012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.540353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.540611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.544881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.545185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.545472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.549914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.550216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.550507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.554877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.555429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.559715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.559969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.560137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.564402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.564678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.564987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.569203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.569499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.569723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.574093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.574381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.574590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.578828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.579100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.579360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.583654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.583915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.584132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.588503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.588785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.588984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.593207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.593510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.593697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.597978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.598229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.598433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.602720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.602972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.603262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.607548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.607824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.608084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.612503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.612890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.613269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.617739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.618041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.618312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.622594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.650 [2024-05-15 13:45:39.622867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.650 [2024-05-15 13:45:39.623103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.650 [2024-05-15 13:45:39.627335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.627523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.627796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.632007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.632278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.632476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.636778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.637024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.637281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.641496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.641769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.642017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.646241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.646482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.646727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.651071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.651340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.651531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.655865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.656108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.656327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.660593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.660862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.661110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.665375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.665638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.665893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.670446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.670709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.670909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.675363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.675682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.675989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.680307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.680632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.680960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.685376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:26.651 [2024-05-15 13:45:39.685672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.685898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.690235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.690548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.690788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.695144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.695469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.695649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.700246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.700606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.700829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.705118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.705410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.705642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.709980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.710290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.710501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.715197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.715544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.715789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.720222] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.720596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.720804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.725039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.725371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.730021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.730360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.730660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.735071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.735380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.735618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.740299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.740575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.740766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.651 [2024-05-15 13:45:39.745180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.651 [2024-05-15 13:45:39.745516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.651 [2024-05-15 13:45:39.745760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.750016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.750332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.750566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.755088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.755393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.755658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.759878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.760148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.760401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.764611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.764844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.765039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.769292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.769531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.769747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.773929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.774188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.774424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.778682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.778927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.779120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.783401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.783635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.783792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.788047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.788366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.909 [2024-05-15 13:45:39.788555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.909 [2024-05-15 13:45:39.792786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.909 [2024-05-15 13:45:39.793033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.793216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.797497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.797756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.798009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.802538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.802785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.803077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.807241] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.807505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.807749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.812017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.812303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.812601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.816876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.817134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.817426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.821657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.821927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.822197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.826473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.826707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.826961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.831352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.831674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.832006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.836437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.836719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.837004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.841330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.841606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.841809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.845993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.846295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.846638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.851041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.851311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:26.910 [2024-05-15 13:45:39.851531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.855816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.856088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.856291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.860523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.860762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.860967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.865310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.865570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.865798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.869971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.870223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.870436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.874872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.875192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.875507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.879858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.880143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.880332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.884560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.884863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.885084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.889472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.889754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.889955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.894294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.894595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.894887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.899207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.899497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.899685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.903986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.904276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.904525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.908895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.909148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.909391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.913702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.913989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.914288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.918879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.919187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.919621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.924063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.924391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.924669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.928976] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.929284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.929653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.934048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.934335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.934628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.938996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.939343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.939553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.943954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.944328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.944501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.948855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.949205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.949478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.953868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:26.910 [2024-05-15 13:45:39.954171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.954474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.958883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.959210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.959402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.963728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.964027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.964340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.968667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.968977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.969172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.973555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.973926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.974130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.978557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.978837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.979053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.983353] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.983637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.983818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.988068] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.988350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.988578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.992902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.993167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.993374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:39.997641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:39.997990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:39.998166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:40.002496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:40.002851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:40.003041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:26.910 [2024-05-15 13:45:40.007367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:26.910 [2024-05-15 13:45:40.007666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.910 [2024-05-15 13:45:40.007880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.012194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.167 [2024-05-15 13:45:40.012503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.167 [2024-05-15 13:45:40.012720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.016959] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.167 [2024-05-15 13:45:40.017220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.167 [2024-05-15 13:45:40.017421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.021701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.167 [2024-05-15 13:45:40.022276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.167 [2024-05-15 13:45:40.022471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.026823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.167 [2024-05-15 13:45:40.027119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.167 [2024-05-15 13:45:40.027486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.031763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.167 [2024-05-15 13:45:40.032015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.167 [2024-05-15 13:45:40.032248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.167 [2024-05-15 13:45:40.036429] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.036726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.036895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.041106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.041400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.041595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.045922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.046249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.046470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.050794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.051112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.051282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.055364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.055653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.055895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.060012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.060334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.060560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.064692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.064977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.065200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.069407] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.069668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.070004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.074289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.074564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.074770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.079036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.079328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.079560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.083867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.084153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.084432] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.088551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.088802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.089053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.093354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.093634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.093987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.098186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.098491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.098712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.102747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.103024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.103175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.107422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.107709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.107907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.112021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.112323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.112564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.116573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.116795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.117010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.121309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.121582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.121778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.126041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.126344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.126542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.130699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.130962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.131153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.135454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.135742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.135908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.140136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.140453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.140681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.144867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.145133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.145361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.149509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.149790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.149985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.154305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.154561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.154755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.159127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.159397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.159595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.163863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.164123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.164337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.168491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.168745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.168909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.173248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.173504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.173692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.178577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.178846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.179040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.183371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.183630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.183799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.188602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.188890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.189095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.193516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.193796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.194102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.199094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.199404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.199602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.203881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.204207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.204482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.208810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.209107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.209426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.213796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.214091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.214381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.218746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:27.168 [2024-05-15 13:45:40.219060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.219334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.223747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.224037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.224254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.228539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.228823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.229004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.233201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.233524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.233853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.238364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.238651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.238970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.243329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.243590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.243776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.248084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.248373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.168 [2024-05-15 13:45:40.248600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.168 [2024-05-15 13:45:40.252938] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.168 [2024-05-15 13:45:40.253292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.169 [2024-05-15 13:45:40.253514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.169 [2024-05-15 13:45:40.257809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.169 [2024-05-15 13:45:40.258069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.169 [2024-05-15 13:45:40.258265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.169 [2024-05-15 13:45:40.262455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.169 [2024-05-15 13:45:40.262706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.169 [2024-05-15 13:45:40.262907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.169 [2024-05-15 13:45:40.267154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.267452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.267710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.271975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.272206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.272497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.276778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.277043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.277376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.281684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.281940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.282107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.286346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.286611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.286859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.291144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.291418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.291643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.295906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.296150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.296403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.300694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.300972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.301156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.305401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.305646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.305849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.310132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.310430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.310655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.314879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.315122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.315364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.319571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.319816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.320024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.324184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.324439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.324605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.328823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.329081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.329329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.333731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.333978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.334177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.338388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.338652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.338888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.343233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.343520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.343697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.348037] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.348355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.348618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.352929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.353189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.353454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.357870] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.358181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.358405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.428 [2024-05-15 13:45:40.362789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.428 [2024-05-15 13:45:40.363064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.428 [2024-05-15 13:45:40.363343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.367666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.367967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.368287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.372613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.372884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.373145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.377563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.377900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.378126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.382600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.382880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.383059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.387524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.387829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.388035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.392493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.392813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.393027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.397348] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.397641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.397860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.402331] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.402606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.402775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.407470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.407781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.408106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.412448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.412715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.417432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.417695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.417882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.422301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.422599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.422875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.427296] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.427576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.427781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.432213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.432552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.432810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.437338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.437620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.437953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.442566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.442870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.443134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.447655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.447928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.448143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.452858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.453129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.453312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.457744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.458148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.458517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.463749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.464154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.464479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.469663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.470099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.470401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.475477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.475834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.476108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.481169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.481568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.481833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.486329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.486602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.486854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.491199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:27.429 [2024-05-15 13:45:40.491512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.491828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.496174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.496477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.429 [2024-05-15 13:45:40.496738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.429 [2024-05-15 13:45:40.500897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.429 [2024-05-15 13:45:40.501139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.501398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.430 [2024-05-15 13:45:40.505609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.430 [2024-05-15 13:45:40.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.506345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.430 [2024-05-15 13:45:40.510739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.430 [2024-05-15 13:45:40.511009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.511179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.430 [2024-05-15 13:45:40.515359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.430 [2024-05-15 13:45:40.515624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.515858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.430 [2024-05-15 13:45:40.520544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.430 [2024-05-15 13:45:40.520887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.521209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.430 [2024-05-15 13:45:40.526121] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.430 [2024-05-15 13:45:40.526411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.430 [2024-05-15 13:45:40.526584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.530881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.531164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.531418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.535726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.535991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.536258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.540630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.540943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.541225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.546203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.546522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.546727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.551171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.551493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.551675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.556067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.556419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.556725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.561125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.561402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.561570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.565977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.566297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.566656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.570910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.689 [2024-05-15 13:45:40.571278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.689 [2024-05-15 13:45:40.571490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.689 [2024-05-15 13:45:40.575829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.576113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.576499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.580985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.581284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.581463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.585952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.586200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.586413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.590605] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.590852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.591015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.595401] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.595721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.596095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.600486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.600753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.600953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.605215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.605521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.605738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.610099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.610407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.610592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.614968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.615261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.615472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.619799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.620060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.620337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.624620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.624897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.625063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.629494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.629788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.630055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.634424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.634709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.634894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.639148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.639447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.639643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.643943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.644189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.644425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.648800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.649079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.649284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.653507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.653771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.654004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.658391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.658658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.658868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.663349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.663634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.663841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.668247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.668530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.668714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.673062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.673327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.673497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.677911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.678158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.678382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.682619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.682863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.683070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.687490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.687760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.687977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.692363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.690 [2024-05-15 13:45:40.692613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.690 [2024-05-15 13:45:40.692848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.690 [2024-05-15 13:45:40.697054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.697307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.697649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.702050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.702324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.702532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.706832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.707093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.707274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.711488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.711776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.711961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.716230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.716538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.716812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.721098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.721370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.721615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.726314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.726678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.726868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.731264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.731527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.731795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.736346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.736597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.736802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.741156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.741450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.741683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.746047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.746359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.746600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.750999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.751273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.751485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.755872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.756296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.756545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.760907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 
00:27:27.691 [2024-05-15 13:45:40.761198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.761471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.765832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.766135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.766423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.770855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.771155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.771440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.775696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.775940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.776108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.780531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.780776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.780969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.691 [2024-05-15 13:45:40.785218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.691 [2024-05-15 13:45:40.785554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.691 [2024-05-15 13:45:40.785812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.951 [2024-05-15 13:45:40.790303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.951 [2024-05-15 13:45:40.790539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.951 [2024-05-15 13:45:40.790724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.951 [2024-05-15 13:45:40.795065] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.951 [2024-05-15 13:45:40.795345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.951 [2024-05-15 13:45:40.795550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.799932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.800222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.800547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.804747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.804990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.805159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.809733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.810017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.810343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.814852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.815125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.815350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.819751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.820030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.820253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.824618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.824948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.825180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.829749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.830018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.830195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.834472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.834745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.834976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.839392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.839688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.839958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.844444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.844768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.844927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.849275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.849588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.849828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.854161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.854451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.854634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.859049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.859339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.859676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.863983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.864232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.864494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.868706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.868991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.869228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.873553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.873822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.874104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.878597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.878861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.879176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.883489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.883757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.883986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.888397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.888653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.888939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.893286] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.893540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.893766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.952 [2024-05-15 13:45:40.898115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.952 [2024-05-15 13:45:40.898375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.952 [2024-05-15 13:45:40.898617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.902773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.903047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.903373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.907689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.907941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.908145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.912415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.912652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.912840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.917076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.917351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.917553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.921941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.922189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.922508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.926816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.927060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.953 [2024-05-15 13:45:40.927250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.931500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.931980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.936103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.936357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.936606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.940928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.941188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.941559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.945903] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.946164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.946349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.950634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.950887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.951090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.955342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.955596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.955808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.960025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.960266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.960444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.964678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.964908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.965099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.969279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.969514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.969841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.973995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.974233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.974432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.978720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.978956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.979148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.983467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.983710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.983890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.988190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.988447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:27.953 [2024-05-15 13:45:40.992899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23110f0) 00:27:27.953 [2024-05-15 13:45:40.993125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.953 [2024-05-15 13:45:40.993338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:27.953 00:27:27.953 Latency(us) 00:27:27.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.953 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:27.953 nvme0n1 : 2.00 6509.60 813.70 0.00 0.00 2454.35 1864.66 6023.07 00:27:27.953 =================================================================================================================== 00:27:27.953 Total : 6509.60 813.70 0.00 0.00 2454.35 1864.66 6023.07 00:27:27.953 0 00:27:27.953 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:27.953 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:27.953 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:27.953 | .driver_specific 00:27:27.953 | .nvme_error 00:27:27.953 | .status_code 00:27:27.953 | .command_transient_transport_error' 00:27:27.953 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 420 > 0 )) 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95843 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95843 ']' 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95843 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95843 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95843' 00:27:28.519 killing process with pid 95843 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95843 00:27:28.519 Received shutdown signal, test time was about 2.000000 seconds 00:27:28.519 00:27:28.519 Latency(us) 00:27:28.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.519 =================================================================================================================== 00:27:28.519 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95843 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:28.519 13:45:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95896 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95896 /var/tmp/bperf.sock 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95896 ']' 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:28.519 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:28.778 [2024-05-15 13:45:41.640523] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:28.778 [2024-05-15 13:45:41.640961] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95896 ] 00:27:28.778 [2024-05-15 13:45:41.770373] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:28.778 [2024-05-15 13:45:41.790577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.778 [2024-05-15 13:45:41.852424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.036 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:29.036 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:29.036 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:29.036 13:45:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.293 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.550 nvme0n1 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:29.550 13:45:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.807 Running I/O for 2 seconds... 
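[editor's note] The trace above then configures the run over RPC before the two-second workload starts. A minimal reconstruction of that sequence, using only commands visible in the trace; rpc_cmd is simplified here to a plain rpc.py call that is assumed to address the nvmf target's default RPC socket, whereas the real helper in autotest_common.sh reuses an already-open RPC connection.

    spdk_rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc() { "$spdk_rpc" -s /var/tmp/bperf.sock "$@"; }   # talks to the bdevperf instance
    rpc_cmd()   { "$spdk_rpc" "$@"; }                          # talks to the nvmf target (simplified assumption)

    # count NVMe errors in iostat and retry failed I/O indefinitely at the bdev layer
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep crc32c injection disabled while the controller is attached
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach the TCP controller with data digest (--ddgst) so each payload carries a CRC32C
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # inject crc32c corruption for the next 256 operations (target-side RPC in this trace),
    # so digest verification fails and commands complete with a transient transport error,
    # then start the timed run
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests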
00:27:29.807 [2024-05-15 13:45:42.739516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fef90 00:27:29.807 [2024-05-15 13:45:42.742463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.742989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.756420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190feb58 00:27:29.807 [2024-05-15 13:45:42.759149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.759513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.773306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fe2e8 00:27:29.807 [2024-05-15 13:45:42.775973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.776291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.789739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fda78 00:27:29.807 [2024-05-15 13:45:42.792387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.792681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.806273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fd208 00:27:29.807 [2024-05-15 13:45:42.808872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.809150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.822644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fc998 00:27:29.807 [2024-05-15 13:45:42.825265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.825555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.838986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fc128 00:27:29.807 [2024-05-15 13:45:42.841583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.841947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.855749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fb8b8 00:27:29.807 [2024-05-15 13:45:42.858338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.858668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.872208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fb048 00:27:29.807 [2024-05-15 13:45:42.874770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.875076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.888676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fa7d8 00:27:29.807 [2024-05-15 13:45:42.891206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.807 [2024-05-15 13:45:42.891518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:29.807 [2024-05-15 13:45:42.905161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f9f68 00:27:30.064 [2024-05-15 13:45:42.907698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.908018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:42.922056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f96f8 00:27:30.064 [2024-05-15 13:45:42.924558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.924905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:42.938548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f8e88 00:27:30.064 [2024-05-15 13:45:42.940984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.941354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:42.955050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f8618 00:27:30.064 [2024-05-15 13:45:42.957496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.957788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:42.971378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f7da8 00:27:30.064 [2024-05-15 13:45:42.973777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.974047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:42.987671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f7538 00:27:30.064 [2024-05-15 13:45:42.990034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:42.990379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.003961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f6cc8 00:27:30.064 [2024-05-15 13:45:43.006322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.006595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.020129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f6458 00:27:30.064 [2024-05-15 13:45:43.022486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.022741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.036646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f5be8 00:27:30.064 [2024-05-15 13:45:43.038944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.039133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.052553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f5378 00:27:30.064 [2024-05-15 13:45:43.054846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.055039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.068632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f4b08 00:27:30.064 [2024-05-15 13:45:43.070940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.071147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.084710] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f4298 00:27:30.064 [2024-05-15 13:45:43.086995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.087194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.100822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f3a28 00:27:30.064 [2024-05-15 13:45:43.103075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.103266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.116765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f31b8 00:27:30.064 [2024-05-15 13:45:43.119024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.119215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.132690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f2948 00:27:30.064 [2024-05-15 13:45:43.134937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.135126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:30.064 [2024-05-15 13:45:43.148643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f20d8 00:27:30.064 [2024-05-15 13:45:43.150841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.064 [2024-05-15 13:45:43.151030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.164595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f1868 00:27:30.327 [2024-05-15 13:45:43.166764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.166953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.180556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f0ff8 00:27:30.327 [2024-05-15 13:45:43.182713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.182910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.196545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f0788 00:27:30.327 [2024-05-15 13:45:43.198713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.198942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.212907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eff18 00:27:30.327 [2024-05-15 13:45:43.215057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.215272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.228963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ef6a8 00:27:30.327 [2024-05-15 13:45:43.231051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.231243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.245275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eee38 00:27:30.327 [2024-05-15 13:45:43.247368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.247560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.261566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ee5c8 00:27:30.327 [2024-05-15 13:45:43.264115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.264308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.278350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190edd58 00:27:30.327 [2024-05-15 13:45:43.280414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.280611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.294523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ed4e8 00:27:30.327 [2024-05-15 13:45:43.296533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.296719] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.310781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ecc78 00:27:30.327 [2024-05-15 13:45:43.312806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.313005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.327000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ec408 00:27:30.327 [2024-05-15 13:45:43.328970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.329162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.343186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ebb98 00:27:30.327 [2024-05-15 13:45:43.345144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.345342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.359250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eb328 00:27:30.327 [2024-05-15 13:45:43.361199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.361428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.375414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eaab8 00:27:30.327 [2024-05-15 13:45:43.377344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.327 [2024-05-15 13:45:43.377529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:30.327 [2024-05-15 13:45:43.391482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ea248 00:27:30.328 [2024-05-15 13:45:43.393410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.328 [2024-05-15 13:45:43.393597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:30.328 [2024-05-15 13:45:43.407432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e99d8 00:27:30.328 [2024-05-15 13:45:43.409348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.328 [2024-05-15 
13:45:43.409545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:30.328 [2024-05-15 13:45:43.423356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e9168 00:27:30.595 [2024-05-15 13:45:43.425214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.595 [2024-05-15 13:45:43.425412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:30.595 [2024-05-15 13:45:43.439383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e88f8 00:27:30.595 [2024-05-15 13:45:43.441289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.595 [2024-05-15 13:45:43.441470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:30.595 [2024-05-15 13:45:43.455532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e8088 00:27:30.596 [2024-05-15 13:45:43.457335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.457522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.471617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e7818 00:27:30.596 [2024-05-15 13:45:43.473414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.473611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.487380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e6fa8 00:27:30.596 [2024-05-15 13:45:43.489078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.489263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.502924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e6738 00:27:30.596 [2024-05-15 13:45:43.504668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.504845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.518832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e5ec8 00:27:30.596 [2024-05-15 13:45:43.520561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:30.596 [2024-05-15 13:45:43.520738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.534874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e5658 00:27:30.596 [2024-05-15 13:45:43.536644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.536872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.551593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e4de8 00:27:30.596 [2024-05-15 13:45:43.553300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.553499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.567280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e4578 00:27:30.596 [2024-05-15 13:45:43.568977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.569164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.583368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e3d08 00:27:30.596 [2024-05-15 13:45:43.585013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.585207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.599739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e3498 00:27:30.596 [2024-05-15 13:45:43.601428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.615804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e2c28 00:27:30.596 [2024-05-15 13:45:43.617461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.617647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.631890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e23b8 00:27:30.596 [2024-05-15 13:45:43.633512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12330 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.633695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.647945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e1b48 00:27:30.596 [2024-05-15 13:45:43.649546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.664260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e12d8 00:27:30.596 [2024-05-15 13:45:43.665862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.666113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:30.596 [2024-05-15 13:45:43.680603] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e0a68 00:27:30.596 [2024-05-15 13:45:43.682212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.596 [2024-05-15 13:45:43.682424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.696714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e01f8 00:27:30.855 [2024-05-15 13:45:43.698294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.713074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190df988 00:27:30.855 [2024-05-15 13:45:43.714613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.714804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.728995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190df118 00:27:30.855 [2024-05-15 13:45:43.730507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.730741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.745075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190de8a8 00:27:30.855 [2024-05-15 13:45:43.746601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:7952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.746788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.761059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190de038 00:27:30.855 [2024-05-15 13:45:43.762544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.762737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.783620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190de038 00:27:30.855 [2024-05-15 13:45:43.786306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.786503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.799616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190de8a8 00:27:30.855 [2024-05-15 13:45:43.802258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.802442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:30.855 [2024-05-15 13:45:43.815582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190df118 00:27:30.855 [2024-05-15 13:45:43.818192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.855 [2024-05-15 13:45:43.818407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.831743] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190df988 00:27:30.856 [2024-05-15 13:45:43.834392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.834593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.847908] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e01f8 00:27:30.856 [2024-05-15 13:45:43.850500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.850698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.864362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e0a68 00:27:30.856 [2024-05-15 13:45:43.866984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.867187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.880713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e12d8 00:27:30.856 [2024-05-15 13:45:43.883319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.883521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.896976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e1b48 00:27:30.856 [2024-05-15 13:45:43.899574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.899781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.913337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e23b8 00:27:30.856 [2024-05-15 13:45:43.915943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.916150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.929814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e2c28 00:27:30.856 [2024-05-15 13:45:43.932354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.932546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:30.856 [2024-05-15 13:45:43.945877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e3498 00:27:30.856 [2024-05-15 13:45:43.948331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.856 [2024-05-15 13:45:43.948527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:43.961796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e3d08 00:27:31.114 [2024-05-15 13:45:43.964229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:43.964415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:43.977640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e4578 00:27:31.114 [2024-05-15 
13:45:43.980039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:43.980222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:43.993619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e4de8 00:27:31.114 [2024-05-15 13:45:43.996021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:43.996208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.009630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e5658 00:27:31.114 [2024-05-15 13:45:44.011996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.012192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.025600] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e5ec8 00:27:31.114 [2024-05-15 13:45:44.027973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.028151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.041626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e6738 00:27:31.114 [2024-05-15 13:45:44.043955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.044144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.057517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e6fa8 00:27:31.114 [2024-05-15 13:45:44.059838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.060024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.073554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e7818 00:27:31.114 [2024-05-15 13:45:44.075902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.076094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.089616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e8088 
00:27:31.114 [2024-05-15 13:45:44.091935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.092130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.106017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e88f8 00:27:31.114 [2024-05-15 13:45:44.108320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.114 [2024-05-15 13:45:44.108513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:31.114 [2024-05-15 13:45:44.122206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e9168 00:27:31.114 [2024-05-15 13:45:44.124505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.124695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:31.115 [2024-05-15 13:45:44.138344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190e99d8 00:27:31.115 [2024-05-15 13:45:44.140596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.140782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:31.115 [2024-05-15 13:45:44.154380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ea248 00:27:31.115 [2024-05-15 13:45:44.156667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.156873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:31.115 [2024-05-15 13:45:44.170689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eaab8 00:27:31.115 [2024-05-15 13:45:44.172949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.173150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:31.115 [2024-05-15 13:45:44.186952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eb328 00:27:31.115 [2024-05-15 13:45:44.189141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.189339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:31.115 [2024-05-15 13:45:44.203034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with 
pdu=0x2000190ebb98 00:27:31.115 [2024-05-15 13:45:44.205214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.115 [2024-05-15 13:45:44.205413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:31.373 [2024-05-15 13:45:44.219224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ec408 00:27:31.373 [2024-05-15 13:45:44.221412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.373 [2024-05-15 13:45:44.221613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:31.373 [2024-05-15 13:45:44.235609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ecc78 00:27:31.373 [2024-05-15 13:45:44.237811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.373 [2024-05-15 13:45:44.238013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:31.373 [2024-05-15 13:45:44.251881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ed4e8 00:27:31.373 [2024-05-15 13:45:44.254066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.373 [2024-05-15 13:45:44.254272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:31.373 [2024-05-15 13:45:44.268587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190edd58 00:27:31.373 [2024-05-15 13:45:44.270716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.270905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.285638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190ee5c8 00:27:31.374 [2024-05-15 13:45:44.287706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.287890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.301918] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eee38 00:27:31.374 [2024-05-15 13:45:44.304023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.304216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.318232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22fc290) with pdu=0x2000190ef6a8 00:27:31.374 [2024-05-15 13:45:44.320329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.320510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.334394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190eff18 00:27:31.374 [2024-05-15 13:45:44.336412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.336591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.350379] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f0788 00:27:31.374 [2024-05-15 13:45:44.352360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.352538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.366385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f0ff8 00:27:31.374 [2024-05-15 13:45:44.368387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.368584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.382570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f1868 00:27:31.374 [2024-05-15 13:45:44.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.384778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.398880] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f20d8 00:27:31.374 [2024-05-15 13:45:44.400905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.401094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.415184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f2948 00:27:31.374 [2024-05-15 13:45:44.417173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.417370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.431619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x22fc290) with pdu=0x2000190f31b8 00:27:31.374 [2024-05-15 13:45:44.433599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.433815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.447992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f3a28 00:27:31.374 [2024-05-15 13:45:44.449932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.450126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:31.374 [2024-05-15 13:45:44.464306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f4298 00:27:31.374 [2024-05-15 13:45:44.466180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.374 [2024-05-15 13:45:44.466383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.480513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f4b08 00:27:31.633 [2024-05-15 13:45:44.482389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.482580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.496887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f5378 00:27:31.633 [2024-05-15 13:45:44.498764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.498962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.513145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f5be8 00:27:31.633 [2024-05-15 13:45:44.514966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.515167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.529396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f6458 00:27:31.633 [2024-05-15 13:45:44.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.531422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.545712] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f6cc8 00:27:31.633 [2024-05-15 13:45:44.547528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.547721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.561975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f7538 00:27:31.633 [2024-05-15 13:45:44.563728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.563911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.578104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f7da8 00:27:31.633 [2024-05-15 13:45:44.579839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.580024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.594110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f8618 00:27:31.633 [2024-05-15 13:45:44.595813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.596017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.610196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f8e88 00:27:31.633 [2024-05-15 13:45:44.611869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.612050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.626248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f96f8 00:27:31.633 [2024-05-15 13:45:44.627867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.628042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.641997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190f9f68 00:27:31.633 [2024-05-15 13:45:44.643609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.643769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:31.633 
[2024-05-15 13:45:44.657800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fa7d8 00:27:31.633 [2024-05-15 13:45:44.659467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.659655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.674139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fb048 00:27:31.633 [2024-05-15 13:45:44.675831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.676033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.690540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fb8b8 00:27:31.633 [2024-05-15 13:45:44.692154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.692364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.706907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fc128 00:27:31.633 [2024-05-15 13:45:44.708570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.708763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:31.633 [2024-05-15 13:45:44.723353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22fc290) with pdu=0x2000190fc998 00:27:31.633 [2024-05-15 13:45:44.724887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.633 [2024-05-15 13:45:44.725076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:31.633 00:27:31.633 Latency(us) 00:27:31.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.633 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:31.633 nvme0n1 : 2.01 15632.47 61.06 0.00 0.00 8181.20 2231.34 31831.77 00:27:31.633 =================================================================================================================== 00:27:31.634 Total : 15632.47 61.06 0.00 0.00 8181.20 2231.34 31831.77 00:27:31.634 0 00:27:31.890 13:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:31.891 13:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:31.891 13:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:31.891 13:45:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 
-- # jq -r '.bdevs[0] 00:27:31.891 | .driver_specific 00:27:31.891 | .nvme_error 00:27:31.891 | .status_code 00:27:31.891 | .command_transient_transport_error' 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 )) 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95896 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95896 ']' 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95896 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95896 00:27:32.149 killing process with pid 95896 00:27:32.149 Received shutdown signal, test time was about 2.000000 seconds 00:27:32.149 00:27:32.149 Latency(us) 00:27:32.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.149 =================================================================================================================== 00:27:32.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95896' 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95896 00:27:32.149 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95896 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95944 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95944 /var/tmp/bperf.sock 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95944 ']' 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:32.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
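The check traced just above is how host/digest.sh decides the run passed: after the crc32c-corruption pass, get_transient_errcount reads the NVMe error statistics for nvme0n1 over the bperf RPC socket and asserts that the transient-transport-error counter is non-zero (here 123) before killing bdevperf (pid 95896) and starting the next run at 128 KiB I/O with queue depth 16. A minimal stand-alone sketch of that check, reusing the socket path, bdev name, and jq filter exactly as they appear in the trace (the errcount variable name is only for illustration):

    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Pass only if the injected data-digest corruption actually produced transient transport errors.
    (( errcount > 0 ))

The per-status-code nvme_error breakdown is available in bdev_get_iostat output presumably because the script passes --nvme-error-stat to bdev_nvme_set_options before attaching the controller, as the trace shows for the next run below.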
00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:32.407 13:45:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:32.407 [2024-05-15 13:45:45.354949] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:32.407 [2024-05-15 13:45:45.355308] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95944 ] 00:27:32.407 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:32.407 Zero copy mechanism will not be used. 00:27:32.407 [2024-05-15 13:45:45.483926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:32.407 [2024-05-15 13:45:45.503691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.665 [2024-05-15 13:45:45.563507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.229 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:33.229 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:33.229 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:33.229 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.794 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.053 nvme0n1 00:27:34.053 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:34.053 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.053 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.053 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.053 13:45:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:34.053 13:45:46 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.053 Zero copy mechanism will not be used. 00:27:34.053 Running I/O for 2 seconds... 00:27:34.053 [2024-05-15 13:45:47.151380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.053 [2024-05-15 13:45:47.152010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.053 [2024-05-15 13:45:47.152221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.156746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.157064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.157394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.162140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.162477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.162789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.167738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.168058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.168283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.173303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.173597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.173794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.178814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.179102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.179303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.184334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with 
pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.184622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.184823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.189757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.190030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.190331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.195159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.195476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.195739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.200674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.200975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.201250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.206149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.206458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.206651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.211708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.212022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.212320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.217261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.217563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.217740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.222734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.223015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.223204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.228188] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.228486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.228695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.233645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.233949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.234226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.239141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.239463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.239688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.244551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.244836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.245033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.312 [2024-05-15 13:45:47.250107] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.312 [2024-05-15 13:45:47.250416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.312 [2024-05-15 13:45:47.250603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.255392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.255710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.255903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.260847] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.261135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.261353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.266301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.266596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.266755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.271685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.271943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.272125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.277058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.277352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.277645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.282441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.282707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.282877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.287809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.288079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.288343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.293136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.293416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.293588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 
[2024-05-15 13:45:47.298565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.298836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.299003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.303905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.304163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.304363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.309060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.309349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.309562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.314312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.314567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.314727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.319681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.319946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.320104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.325057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.325349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.325524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.331230] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.331529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.331690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.337547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.337816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.337971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.342868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.343127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.343298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.348303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.348559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.348724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.354466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.354755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.354936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.359909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.360221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.360494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.365407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.365680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.365889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.370779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.371039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.371355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.376119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.376398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.376616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.381481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.381745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.381948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.386812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.387088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.387262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.392223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.392508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.392660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.397647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.397947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.398107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.313 [2024-05-15 13:45:47.403092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.313 [2024-05-15 13:45:47.403402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.313 [2024-05-15 13:45:47.403568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.314 [2024-05-15 13:45:47.408549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.314 [2024-05-15 13:45:47.408816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.314 [2024-05-15 13:45:47.408982] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.572 [2024-05-15 13:45:47.413936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.572 [2024-05-15 13:45:47.414178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.572 [2024-05-15 13:45:47.414354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.572 [2024-05-15 13:45:47.419295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.572 [2024-05-15 13:45:47.419566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.572 [2024-05-15 13:45:47.419716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.572 [2024-05-15 13:45:47.424680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.572 [2024-05-15 13:45:47.424969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.572 [2024-05-15 13:45:47.425214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.572 [2024-05-15 13:45:47.430185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.572 [2024-05-15 13:45:47.430488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.572 [2024-05-15 13:45:47.430662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.572 [2024-05-15 13:45:47.435616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.572 [2024-05-15 13:45:47.435912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.436086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.441040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.441318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.441473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.446409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.446675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.446840] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.451818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.452084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.452269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.457175] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.457462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.457772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.462321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.462602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.462765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.467599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.467902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.468078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.472913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.473158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.473340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.478263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.478529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.478678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.483582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.483844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 
13:45:47.483996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.488976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.489244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.489436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.494364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.494628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.494833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.499722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.499978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.500136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.505041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.505327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.510431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.510714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.510929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.515779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.516093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.516324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.520911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.521172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.521405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.526356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.526674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.526920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.531751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.532031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.532074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.537038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.537316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.537523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.542412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.542663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.542827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.547650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.547915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.548113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.552597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.552859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.553092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.557819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.558067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.558309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.563060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.563319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.563582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.568247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.568497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.568649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.573160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.573419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.573630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.577909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.573 [2024-05-15 13:45:47.578157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.573 [2024-05-15 13:45:47.578355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.573 [2024-05-15 13:45:47.583102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.583387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.583627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.588277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.588539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.588773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.593467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.593736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.593978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.598763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.599026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.599205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.603958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.604247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.604421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.609319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.609582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.609746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.614722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.614981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.615144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.620077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.620360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.620521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.625415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.625686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.625996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.630728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.630988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.631138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.636003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.636245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.636417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.641289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.641533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.641676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.646591] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.646878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.647029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.651874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.652187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.652350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.656975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.657236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.657421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.662420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 13:45:47.662674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.662840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.574 [2024-05-15 13:45:47.667754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.574 [2024-05-15 
13:45:47.668029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.574 [2024-05-15 13:45:47.668329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.673060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.673331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.673481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.678390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.678641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.678804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.683660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.683918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.684107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.688932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.689193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.689374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.694273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.694526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.694676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.699522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.699772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.699931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.704861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with 
pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.705152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.705353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.710155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.710421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.710574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.715461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.715709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.716000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.720753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.720993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.833 [2024-05-15 13:45:47.721153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.833 [2024-05-15 13:45:47.726133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.833 [2024-05-15 13:45:47.726444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.726725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.731774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.732109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.732292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.737294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.737610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.737796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.742768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.743073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.743263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.748184] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.748535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.748724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.753606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.753915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.758974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.759282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.759496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.764335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.764617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.764812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.769750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.770065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.770249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.775152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.775474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.775662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.780580] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.780904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.781128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.786118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.786465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.786681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.791153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.791576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.791741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.796134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.796644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.796818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.801387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.801661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.801905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.806797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.807068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.807290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.811987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.812282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.812462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:27:34.834 [2024-05-15 13:45:47.817368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.817615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.817795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.822753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.823013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.823170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.828002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.828286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.828469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.833314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.833572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.833709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.838550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.838844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.839000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.843928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.844173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.844355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.849191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.849463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.849663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.854556] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.854803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.854974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.859653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.859903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.860059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.864652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.864897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.865045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.870005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.870308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.834 [2024-05-15 13:45:47.870510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.834 [2024-05-15 13:45:47.875354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.834 [2024-05-15 13:45:47.875656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.875808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.880504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.880797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.880942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.885924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.886179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.886380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.890943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.891196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.891385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.896316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.896559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.896867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.901579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.901872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.902032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.906881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.907116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.907273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.912187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.912448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.912593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.917490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.917749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.917892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.922780] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.923011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.923184] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:34.835 [2024-05-15 13:45:47.927974] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:34.835 [2024-05-15 13:45:47.928212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.835 [2024-05-15 13:45:47.928381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.094 [2024-05-15 13:45:47.933068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.094 [2024-05-15 13:45:47.933298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.094 [2024-05-15 13:45:47.933467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.094 [2024-05-15 13:45:47.938059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.094 [2024-05-15 13:45:47.938323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.094 [2024-05-15 13:45:47.938564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.094 [2024-05-15 13:45:47.943173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.094 [2024-05-15 13:45:47.943469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.943697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.948447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.948728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.948992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.953785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.954076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.954249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.959194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.959530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.959678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.964195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.964486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.964637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.969550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.969872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.970034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.975073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.975420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.975627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.980511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.980835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.981040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.985664] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.985995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.986225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.990654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.990922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:47.991072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:47.995946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:47.996212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 
13:45:47.996397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.001347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.001628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.001882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.006756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.007057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.007250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.012088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.012364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.012509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.017369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.017681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.017927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.022630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.022901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.023078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.027894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.028148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.028314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.033208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.033491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:35.095 [2024-05-15 13:45:48.033650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.038604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.038859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.039010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.043785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.044040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.044307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.049100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.049367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.049654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.054436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.054705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.054866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.059750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.060036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.060196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.065004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.065293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.065505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.070219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.070537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.070687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.075481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.075748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.075914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.080810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.081074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.081262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.086074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.086342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.095 [2024-05-15 13:45:48.086633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.095 [2024-05-15 13:45:48.091038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.095 [2024-05-15 13:45:48.091303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.091472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.095942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.096197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.096366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.101083] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.101371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.101611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.106474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.106784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.107039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.111771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.112070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.112325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.116656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.116897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.117042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.121618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.121911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.122055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.126365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.126735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.126995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.131166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.131623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.131779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.136087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.136350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.136510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.141198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.141493] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.141644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.146643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.146913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.147083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.152007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.152283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.152521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.157414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.157687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.157858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.162758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.162999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.163304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.168036] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.168309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.168548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.173489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.173774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.173939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.178943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.179212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.179396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.184313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.184570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.184850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.096 [2024-05-15 13:45:48.189712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.096 [2024-05-15 13:45:48.189997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.096 [2024-05-15 13:45:48.190151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.434 [2024-05-15 13:45:48.195166] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.434 [2024-05-15 13:45:48.195453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.434 [2024-05-15 13:45:48.195614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.434 [2024-05-15 13:45:48.200487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.434 [2024-05-15 13:45:48.200769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.434 [2024-05-15 13:45:48.200921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.434 [2024-05-15 13:45:48.205904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.434 [2024-05-15 13:45:48.206159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.434 [2024-05-15 13:45:48.206329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.434 [2024-05-15 13:45:48.211322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.434 [2024-05-15 13:45:48.211613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.434 [2024-05-15 13:45:48.211820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.434 [2024-05-15 13:45:48.216666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.434 
[2024-05-15 13:45:48.216935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.434 [2024-05-15 13:45:48.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:35.434 [2024-05-15 13:45:48.221878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90
00:27:35.434 [2024-05-15 13:45:48.222114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.434 [2024-05-15 13:45:48.222292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record pattern (tcp.c:2058:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90, an nvme_qpair.c WRITE command *NOTICE*, and an nvme_qpair.c COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion *NOTICE*) repeats for each injected WRITE, with only lba, cid and sqhd varying, from 13:45:48.227 through 13:45:48.877 (elapsed 00:27:35.434 to 00:27:35.960) ...]
00:27:35.960 [2024-05-15 13:45:48.882833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90
00:27:35.960 [2024-05-15 13:45:48.883104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.960 [2024-05-15 13:45:48.883279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.888047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.888308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.888469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.893128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.893394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.893549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.898239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.898518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.898661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.903205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.903459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.903629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.908319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.908600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.908741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.913571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.913852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.914006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.918818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.919138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.919310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.923856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.924181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.924366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.928957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.929237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.929455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.933830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.934082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.934232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.938688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.938940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.939081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.943592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.943849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.943989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.948566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.948805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.948960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.953385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.953609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.953794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.958033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.958265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.960 [2024-05-15 13:45:48.962585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.960 [2024-05-15 13:45:48.962815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.960 [2024-05-15 13:45:48.962966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.967153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.967418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.967557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.971805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.972080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.972230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.976551] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.976816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.976950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.981399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.981631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.981802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.985884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.986264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 
[2024-05-15 13:45:48.986580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.990300] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.990699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.990888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.994744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.994988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.995153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:48.999438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:48.999655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:48.999809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.004114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.004347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.004495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.008724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.008969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.009119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.013466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.013759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.013945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.018361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.018780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.023030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.023349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.023500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.027719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.027942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.028090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.032262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.032493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.032625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.036795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.037019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.037184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.041485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.041711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.041902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.045773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.050018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.050419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.050692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:35.961 [2024-05-15 13:45:49.054561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:35.961 [2024-05-15 13:45:49.054773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.961 [2024-05-15 13:45:49.054911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.059116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.059353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.059526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.063745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.063962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.064126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.068364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.068586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.068753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.073020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.073289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.073430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.077831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.078078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.078223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.082739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.082975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.083112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.087580] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.087815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.088005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.092469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.092708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.092893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.097443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.097679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.097886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.102296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.102531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.102674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.107280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.107503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.107696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.220 [2024-05-15 13:45:49.111998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.220 [2024-05-15 13:45:49.112280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.220 [2024-05-15 13:45:49.112423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.116971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.117212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.117408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.121915] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.122217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.122438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.126683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.127034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.127213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.131407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.131843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.132056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.136213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.136477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.136665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:36.221 [2024-05-15 13:45:49.141267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2237fa0) with pdu=0x2000190fef90 00:27:36.221 [2024-05-15 13:45:49.141502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.221 [2024-05-15 13:45:49.141642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:36.221 00:27:36.221 Latency(us) 00:27:36.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:36.221 nvme0n1 : 2.00 5979.86 747.48 0.00 0.00 2670.29 1513.57 9175.04 00:27:36.221 =================================================================================================================== 00:27:36.221 Total : 5979.86 747.48 0.00 0.00 2670.29 1513.57 9175.04 00:27:36.221 0 00:27:36.221 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:36.221 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:36.221 | .driver_specific 00:27:36.221 | .nvme_error 00:27:36.221 | .status_code 00:27:36.221 | .command_transient_transport_error' 00:27:36.221 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:36.221 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 386 > 0 )) 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95944 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95944 ']' 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95944 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.479 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95944 00:27:36.479 killing process with pid 95944 00:27:36.479 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.479 00:27:36.479 Latency(us) 00:27:36.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.480 =================================================================================================================== 00:27:36.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.480 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:36.480 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:36.480 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95944' 00:27:36.480 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95944 00:27:36.480 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95944 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95763 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95763 ']' 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95763 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95763 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95763' 00:27:36.737 killing process with pid 95763 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95763 00:27:36.737 
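The (( 386 > 0 )) check above is the actual pass condition: host/digest.sh pulls the per-bdev error counters over the bperf RPC socket and keys off command_transient_transport_error. A minimal sketch of that query, reusing the RPC call and jq filter from the trace (the JSON field layout is assumed from that filter, not verified here):

errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # this run counted 386, so nvmf_digest_error passes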
[2024-05-15 13:45:49.641215] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95763 00:27:36.737 00:27:36.737 real 0m16.644s 00:27:36.737 user 0m31.808s 00:27:36.737 sys 0m5.136s 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.737 13:45:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.737 ************************************ 00:27:36.737 END TEST nvmf_digest_error 00:27:36.737 ************************************ 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.994 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.995 rmmod nvme_tcp 00:27:36.995 rmmod nvme_fabrics 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 95763 ']' 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 95763 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 95763 ']' 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 95763 00:27:36.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95763) - No such process 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 95763 is not found' 00:27:36.995 Process with pid 95763 is not found 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.995 13:45:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.995 13:45:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:36.995 00:27:36.995 real 0m34.068s 00:27:36.995 user 1m3.393s 00:27:36.995 sys 0m10.759s 00:27:36.995 13:45:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:36.995 
************************************ 00:27:36.995 END TEST nvmf_digest 00:27:36.995 ************************************ 00:27:36.995 13:45:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:36.995 13:45:50 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:27:36.995 13:45:50 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:27:36.995 13:45:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:36.995 13:45:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:36.995 13:45:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.995 13:45:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.995 ************************************ 00:27:36.995 START TEST nvmf_host_multipath 00:27:36.995 ************************************ 00:27:36.995 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:37.253 * Looking for test storage... 00:27:37.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:37.253 13:45:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.254 13:45:50 
nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:37.254 13:45:50 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:37.254 Cannot find device "nvmf_tgt_br" 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:37.254 Cannot find device "nvmf_tgt_br2" 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:37.254 Cannot find device "nvmf_tgt_br" 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:37.254 Cannot find device "nvmf_tgt_br2" 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:37.254 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:37.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:37.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:37.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:27:37.513 00:27:37.513 --- 10.0.0.2 ping statistics --- 00:27:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.513 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:37.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:37.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:27:37.513 00:27:37.513 --- 10.0.0.3 ping statistics --- 00:27:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.513 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:37.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:37.513 00:27:37.513 --- 10.0.0.1 ping statistics --- 00:27:37.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.513 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.513 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:37.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=96214 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 96214 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 96214 ']' 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.771 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:37.771 [2024-05-15 13:45:50.697216] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:37.771 [2024-05-15 13:45:50.697336] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.771 [2024-05-15 13:45:50.827418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
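Before nvmf_tgt comes up, nvmf_veth_init (NET_TYPE=virt) wires the initiator side and the target namespace together over a bridge and verifies reachability with the pings shown above. A condensed sketch of that setup, using only interface names and addresses taken from the trace (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target (0.073 ms reply above)
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> host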
00:27:37.771 [2024-05-15 13:45:50.844149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:38.029 [2024-05-15 13:45:50.896154] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.029 [2024-05-15 13:45:50.896427] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.029 [2024-05-15 13:45:50.896494] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.029 [2024-05-15 13:45:50.896568] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.029 [2024-05-15 13:45:50.896624] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:38.029 [2024-05-15 13:45:50.896829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.029 [2024-05-15 13:45:50.896834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.029 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.029 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:38.029 13:45:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.029 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.029 13:45:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:38.029 13:45:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.029 13:45:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96214 00:27:38.029 13:45:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:38.287 [2024-05-15 13:45:51.300074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.287 13:45:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:38.544 Malloc0 00:27:38.801 13:45:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:39.059 13:45:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:39.059 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:39.317 [2024-05-15 13:45:52.282828] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:39.317 [2024-05-15 13:45:52.283140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.317 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:39.575 [2024-05-15 13:45:52.487202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:39.575 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96258 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96258 /var/tmp/bdevperf.sock 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 96258 ']' 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:39.575 13:45:52 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:27:40.571 13:45:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:40.571 13:45:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:27:40.571 13:45:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:40.845 13:45:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:41.104 Nvme0n1 00:27:41.104 13:45:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:41.361 Nvme0n1 00:27:41.361 13:45:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:27:41.361 13:45:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:42.298 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:42.298 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:42.863 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:43.122 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:43.122 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96303 00:27:43.122 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:43.122 13:45:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:49.679 Attaching 4 probes... 00:27:49.679 @path[10.0.0.2, 4421]: 18901 00:27:49.679 @path[10.0.0.2, 4421]: 19361 00:27:49.679 @path[10.0.0.2, 4421]: 18385 00:27:49.679 @path[10.0.0.2, 4421]: 18560 00:27:49.679 @path[10.0.0.2, 4421]: 18933 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96303 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:49.679 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:49.938 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:49.938 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96416 00:27:49.938 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:49.938 13:46:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:56.544 13:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:56.544 13:46:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.544 Attaching 4 probes... 
00:27:56.544 @path[10.0.0.2, 4420]: 19395 00:27:56.544 @path[10.0.0.2, 4420]: 19545 00:27:56.544 @path[10.0.0.2, 4420]: 19320 00:27:56.544 @path[10.0.0.2, 4420]: 19057 00:27:56.544 @path[10.0.0.2, 4420]: 16417 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96416 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:56.544 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:56.802 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:56.802 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96528 00:27:56.802 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:27:56.802 13:46:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:03.354 13:46:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:03.354 13:46:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.354 Attaching 4 probes... 
00:28:03.354 @path[10.0.0.2, 4421]: 14767 00:28:03.354 @path[10.0.0.2, 4421]: 16197 00:28:03.354 @path[10.0.0.2, 4421]: 16220 00:28:03.354 @path[10.0.0.2, 4421]: 16188 00:28:03.354 @path[10.0.0.2, 4421]: 16135 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96528 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:28:03.354 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:28:03.612 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:28:03.612 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96648 00:28:03.612 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:03.612 13:46:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:10.206 Attaching 4 probes... 
00:28:10.206 00:28:10.206 00:28:10.206 00:28:10.206 00:28:10.206 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96648 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:28:10.206 13:46:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:28:10.206 13:46:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:10.464 13:46:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:28:10.464 13:46:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96762 00:28:10.464 13:46:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:10.464 13:46:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:17.023 Attaching 4 probes... 
00:28:17.023 @path[10.0.0.2, 4421]: 18988 00:28:17.023 @path[10.0.0.2, 4421]: 18985 00:28:17.023 @path[10.0.0.2, 4421]: 18793 00:28:17.023 @path[10.0.0.2, 4421]: 16513 00:28:17.023 @path[10.0.0.2, 4421]: 19084 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96762 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:17.023 13:46:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:28:17.956 13:46:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:28:17.956 13:46:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96880 00:28:17.956 13:46:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:17.956 13:46:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:24.512 13:46:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:24.512 13:46:36 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:24.512 Attaching 4 probes... 
00:28:24.512 @path[10.0.0.2, 4420]: 17878 00:28:24.512 @path[10.0.0.2, 4420]: 17133 00:28:24.512 @path[10.0.0.2, 4420]: 14948 00:28:24.512 @path[10.0.0.2, 4420]: 15194 00:28:24.512 @path[10.0.0.2, 4420]: 15284 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96880 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:24.512 [2024-05-15 13:46:37.479856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:24.512 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:28:24.770 13:46:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:28:31.335 13:46:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:28:31.335 13:46:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97060 00:28:31.335 13:46:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:28:31.335 13:46:43 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96214 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:28:37.898 13:46:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:37.898 13:46:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:37.898 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:28:37.898 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:37.898 Attaching 4 probes... 
00:28:37.898 @path[10.0.0.2, 4421]: 17347 00:28:37.898 @path[10.0.0.2, 4421]: 15428 00:28:37.899 @path[10.0.0.2, 4421]: 18389 00:28:37.899 @path[10.0.0.2, 4421]: 18209 00:28:37.899 @path[10.0.0.2, 4421]: 18285 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97060 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 96258 ']' 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96258' 00:28:37.899 killing process with pid 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 96258 00:28:37.899 Connection closed with partial response: 00:28:37.899 00:28:37.899 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96258 00:28:37.899 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:37.899 [2024-05-15 13:45:52.542956] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:37.899 [2024-05-15 13:45:52.543163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96258 ] 00:28:37.899 [2024-05-15 13:45:52.667279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:37.899 [2024-05-15 13:45:52.685788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.899 [2024-05-15 13:45:52.772546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:37.899 Running I/O for 90 seconds... 
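Every per-ANA-state sweep above repeats the same two helpers from host/multipath.sh: set_ANA_state (lines 58-59) flips the ANA state of the two listeners, and confirm_io_on_port (lines 64-73) checks that bdevperf I/O actually lands on the port the target reports for the expected state. The sketch below is reconstructed from the commands visible in the log, not copied from the script; local variable names and the redirect of the bpftrace output into trace.txt are assumptions.

# Sketch of the two helpers exercised repeatedly above (reconstructed, not verbatim).
spdk=/home/vagrant/spdk_repo/spdk
rpc=$spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
trace=$spdk/test/nvmf/host/trace.txt

set_ANA_state() {    # $1 = ANA state for port 4420, $2 = ANA state for port 4421
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

confirm_io_on_port() {    # $1 = expected ANA state, $2 = expected port ('' '' when no I/O should flow)
    # Count per-path I/O on the target for ~6 s ($nvmfapp_pid is the nvmf_tgt pid, 96214 here).
    $spdk/scripts/bpftrace.sh "$nvmfapp_pid" $spdk/scripts/bpf/nvmf_path.bt > "$trace" &
    dtrace_pid=$!
    sleep 6
    # Port the target reports for the expected ANA state...
    active_port=$($rpc nvmf_subsystem_get_listeners $nqn \
        | jq -r ".[] | select(.ana_states[0].ana_state==\"$1\") | .address.trsvcid")
    # ...and the port the @path[10.0.0.2, <port>] counters in the trace were actually hit on.
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$2" ]] && [[ $active_port == "$2" ]]
    kill "$dtrace_pid"
    rm -f "$trace"
}

The inaccessible/inaccessible sweep earlier (dtrace_pid 96648) is why the helper also works with empty arguments: with no optimized or non_optimized listener, both the jq filter and the awk match return nothing, so the check passes only when no I/O was traced at all.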
00:28:37.899 [2024-05-15 13:46:02.806449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.806973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.806995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.807010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.807048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.807096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.807134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.899 [2024-05-15 13:46:02.807170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:37.899 [2024-05-15 13:46:02.807655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.899 [2024-05-15 13:46:02.807671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102704 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.807708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.807746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.807783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.807824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.807862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.807904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.807941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.807963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.807978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.808436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 
m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.808969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.808984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.809022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.900 [2024-05-15 13:46:02.809059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.809097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.809133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.809171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.809208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:37.900 [2024-05-15 13:46:02.809230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.900 [2024-05-15 13:46:02.809246] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0
[... several hundred near-identical nvme_qpair.c NOTICE pairs follow in the console output (00:28:37.901 - 00:28:37.906): each pair is a 243:nvme_io_qpair_print_command record (READ or WRITE, sqid:1, nsid:1, len:8, SGL TRANSPORT DATA BLOCK or SGL DATA BLOCK OFFSET) immediately followed by a 474:spdk_nvme_print_completion record reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1. The pairs arrive in three bursts: 2024-05-15 13:46:02.809-02.813 covering LBAs 102832-103616, 13:46:09.469-09.474 covering LBAs 61840-62856, and 13:46:16.491-16.492 covering LBAs 58440-59072 ...]
00:28:37.906 [2024-05-15 13:46:16.492534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.492959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.492991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.906 [2024-05-15 13:46:16.493005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493274] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:37.906 [2024-05-15 13:46:16.493363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.906 [2024-05-15 13:46:16.493377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:37.907 [2024-05-15 13:46:16.493623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.493726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.493976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.493997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 
nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.907 [2024-05-15 13:46:16.494201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:37.907 [2024-05-15 13:46:16.494643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.907 [2024-05-15 13:46:16.494658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.494694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.494731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 
dnr:0 00:28:37.908 [2024-05-15 13:46:16.494753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.494768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.494806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.494846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.494882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.494919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.494958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.494984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.494999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.495417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:37.908 [2024-05-15 13:46:16.495872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.495968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.495982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.496003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.908 [2024-05-15 13:46:16.496017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.496041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.496056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.496076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.496090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.496111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.908 [2024-05-15 13:46:16.496142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:37.908 [2024-05-15 13:46:16.496164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:16.496179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:16.496216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 
nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:16.496253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:16.496300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:16.496337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:16.496970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:16.496994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.932491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10128 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.932983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.909 [2024-05-15 13:46:29.932998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.933014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.933029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.933045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.933060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.933077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.909 [2024-05-15 13:46:29.933092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.909 [2024-05-15 13:46:29.933108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 
[2024-05-15 13:46:29.933185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.933512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.933975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.933990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.934021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.910 [2024-05-15 13:46:29.934052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.910 [2024-05-15 13:46:29.934397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.910 [2024-05-15 13:46:29.934411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934529] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.934828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934855] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.934978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.934992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9928 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.911 [2024-05-15 13:46:29.935327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 
[2024-05-15 13:46:29.935475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.911 [2024-05-15 13:46:29.935672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.911 [2024-05-15 13:46:29.935687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935775] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:10504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.935976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.935992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.936007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.936039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:37.912 [2024-05-15 13:46:29.936074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-05-15 13:46:29.936305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:37.912 [2024-05-15 13:46:29.936380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:37.912 [2024-05-15 13:46:29.936391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10032 len:8 PRP1 0x0 PRP2 0x0 00:28:37.912 [2024-05-15 13:46:29.936406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936462] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11f5980 was disconnected and freed. reset controller. 
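The wall of ABORTED - SQ DELETION (00/08) completions above is the expected signature of a queue pair being torn down while I/O is still outstanding: when the submission queue is deleted ahead of the controller reset, every queued READ and WRITE is completed with status code type 0x0 (generic) and status code 0x08, after which the bdev layer frees the qpair (0x11f5980) and resets the controller. The fragment below is a minimal sketch of how an application-side completion callback could recognize these aborts; it is not part of the test, and the callback name and counter are illustrative, but the status-code constants and helpers are taken from the public SPDK NVMe headers.

#include <stdio.h>
#include <stdint.h>
#include "spdk/nvme.h"  /* struct spdk_nvme_cpl, spdk_nvme_cpl_is_error(), status code enums */

/* Illustrative counter: I/Os that were aborted because their submission queue was deleted. */
static uint64_t g_sq_deletion_aborts;

/* Illustrative spdk_nvme_cmd_cb, e.g. passed as cb_fn to spdk_nvme_ns_cmd_read()/write(). */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        if (!spdk_nvme_cpl_is_error(cpl)) {
                return;                 /* normal completion */
        }

        /* "ABORTED - SQ DELETION (00/08)" in the log is status code type 0x0
         * (generic) with status code 0x08. */
        if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                g_sq_deletion_aborts++; /* benign while the controller is being reset */
                return;
        }

        fprintf(stderr, "unexpected I/O error: sct=0x%x sc=0x%x\n",
                cpl->status.sct, cpl->status.sc);
}

In the run above the same abort pattern repeats during the disconnect window until the reconnect at 13:46:40 succeeds and the controller reset completes.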
00:28:37.912 [2024-05-15 13:46:29.936561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.912 [2024-05-15 13:46:29.936581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.912 [2024-05-15 13:46:29.936612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.912 [2024-05-15 13:46:29.936651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:37.912 [2024-05-15 13:46:29.936681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:37.912 [2024-05-15 13:46:29.936696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204170 is same with the state(5) to be set 00:28:37.912 [2024-05-15 13:46:29.937675] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.912 [2024-05-15 13:46:29.937712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1204170 (9): Bad file descriptor 00:28:37.912 [2024-05-15 13:46:29.938008] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.912 [2024-05-15 13:46:29.938105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.912 [2024-05-15 13:46:29.938151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.912 [2024-05-15 13:46:29.938169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1204170 with addr=10.0.0.2, port=4421 00:28:37.912 [2024-05-15 13:46:29.938186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1204170 is same with the state(5) to be set 00:28:37.912 [2024-05-15 13:46:29.938351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1204170 (9): Bad file descriptor 00:28:37.912 [2024-05-15 13:46:29.938401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:37.912 [2024-05-15 13:46:29.938419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:37.912 [2024-05-15 13:46:29.938435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:37.912 [2024-05-15 13:46:29.938464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:37.912 [2024-05-15 13:46:29.938478] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:37.912 [2024-05-15 13:46:40.029229] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:37.912 Received shutdown signal, test time was about 55.664482 seconds 00:28:37.912 00:28:37.912 Latency(us) 00:28:37.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.912 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:37.912 Verification LBA range: start 0x0 length 0x4000 00:28:37.912 Nvme0n1 : 55.66 7432.14 29.03 0.00 0.00 17195.23 827.00 7030452.42 00:28:37.912 =================================================================================================================== 00:28:37.913 Total : 7432.14 29.03 0.00 0.00 17195.23 827.00 7030452.42 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:37.913 rmmod nvme_tcp 00:28:37.913 rmmod nvme_fabrics 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 96214 ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 96214 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 96214 ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 96214 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96214 00:28:37.913 killing process with pid 96214 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96214' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 96214 00:28:37.913 
[2024-05-15 13:46:50.679788] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 96214 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:37.913 ************************************ 00:28:37.913 END TEST nvmf_host_multipath 00:28:37.913 ************************************ 00:28:37.913 00:28:37.913 real 1m0.854s 00:28:37.913 user 2m45.336s 00:28:37.913 sys 0m22.626s 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:37.913 13:46:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:37.913 13:46:50 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:37.913 13:46:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:37.913 13:46:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:37.913 13:46:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:37.913 ************************************ 00:28:37.913 START TEST nvmf_timeout 00:28:37.913 ************************************ 00:28:37.913 13:46:50 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:38.171 * Looking for test storage... 
00:28:38.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.171 
13:46:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:38.171 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.172 13:46:51 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:38.172 Cannot find device "nvmf_tgt_br" 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:38.172 Cannot find device "nvmf_tgt_br2" 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:38.172 Cannot find device "nvmf_tgt_br" 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:38.172 Cannot find device "nvmf_tgt_br2" 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:38.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:28:38.172 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:38.172 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:38.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:28:38.430 00:28:38.430 --- 10.0.0.2 ping statistics --- 00:28:38.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.430 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:38.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:38.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:28:38.430 00:28:38.430 --- 10.0.0.3 ping statistics --- 00:28:38.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.430 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:28:38.430 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:38.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:38.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:28:38.431 00:28:38.431 --- 10.0.0.1 ping statistics --- 00:28:38.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.431 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=97365 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 97365 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 97365 ']' 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:38.431 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:38.688 [2024-05-15 13:46:51.593868] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:28:38.688 [2024-05-15 13:46:51.594415] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.688 [2024-05-15 13:46:51.753391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:38.689 [2024-05-15 13:46:51.769992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:38.946 [2024-05-15 13:46:51.851121] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.946 [2024-05-15 13:46:51.851567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:38.946 [2024-05-15 13:46:51.851823] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:38.946 [2024-05-15 13:46:51.852041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:38.946 [2024-05-15 13:46:51.852194] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:38.946 [2024-05-15 13:46:51.852487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.946 [2024-05-15 13:46:51.852503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.946 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:38.946 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:38.946 13:46:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:38.946 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.946 13:46:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:38.946 13:46:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:38.946 13:46:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:38.946 13:46:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:39.204 [2024-05-15 13:46:52.246473] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.204 13:46:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:39.461 Malloc0 00:28:39.461 13:46:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.718 13:46:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.976 13:46:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:40.234 [2024-05-15 13:46:53.302496] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:40.234 [2024-05-15 13:46:53.303067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97407 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97407 /var/tmp/bdevperf.sock 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 97407 ']' 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:40.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:40.234 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:40.491 [2024-05-15 13:46:53.360986] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:40.491 [2024-05-15 13:46:53.361484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97407 ] 00:28:40.491 [2024-05-15 13:46:53.483727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:40.491 [2024-05-15 13:46:53.497459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.491 [2024-05-15 13:46:53.574252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.748 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:40.748 13:46:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:40.748 13:46:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:41.005 13:46:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:41.263 NVMe0n1 00:28:41.263 13:46:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97423 00:28:41.263 13:46:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:41.263 13:46:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:28:41.521 Running I/O for 10 seconds... 
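Stripped of the harness plumbing, the control-plane sequence traced above reduces to roughly the following sketch. The /home/vagrant/spdk_repo paths are shortened here for readability; all options are copied from the trace.

RPC=scripts/rpc.py    # shortened from the full spdk_repo path used in the log

# Target side: nvmf_tgt runs inside the namespace, RPCs go over the default socket
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf started with -z idles until configured over its own RPC
# socket; the attach carries the two reconnect knobs this test exercises
# (retry every 2 s, declare the controller lost after 5 s unreachable).
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The verify job is configured for a 10-second run at queue depth 128 with 4 KiB I/O, but about a second in the test removes the listener again (host/timeout.sh@55, next entry) to provoke the timeout and reconnect path whose output follows.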
00:28:42.454 13:46:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.716 [2024-05-15 13:46:55.609332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.609722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.609952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.610167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.610430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.610624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.610786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.610901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.611120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.611368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.611601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.611819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.612038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.612259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.716 [2024-05-15 13:46:55.612471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.716 [2024-05-15 13:46:55.612667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.612907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.613082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.613309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 
[2024-05-15 13:46:55.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.613574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.613797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.614025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.614231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.614497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.614944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.615164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.615387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.615601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.615829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.616048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.616287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.616530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.616780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.616969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.617156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.617360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.617559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.617739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.617953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.618355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.618545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.618752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.618944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.619136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.717 [2024-05-15 13:46:55.619340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.619532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.619720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.619917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.620104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.620318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.620511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.620705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.620903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.621100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.621304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.621513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.621704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.621947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.622127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.622356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.622558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.622763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:88072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.622941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.623134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.623341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.623533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.623720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.623918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.624104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.624317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.717 [2024-05-15 13:46:55.624516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.717 [2024-05-15 13:46:55.624719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.624908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.625100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.625303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.625498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.625688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.625901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.626092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.626304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.626490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.626690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.626877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.627282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.627472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.627523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.627568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.627603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 
[2024-05-15 13:46:55.627691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:88680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.627980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.627998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:88736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.628014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.628048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.628081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.628115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.718 [2024-05-15 13:46:55.628149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.628183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.718 [2024-05-15 13:46:55.628217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.718 [2024-05-15 13:46:55.628251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:88240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:88248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:88312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.719 [2024-05-15 13:46:55.628721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88776 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:88784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:88792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:88816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.628979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:88832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.628994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:88856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 
[2024-05-15 13:46:55.629097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.719 [2024-05-15 13:46:55.629279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.719 [2024-05-15 13:46:55.629297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:88328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.629914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:88904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.629959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.629983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.720 [2024-05-15 13:46:55.630300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.630352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.630396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.630428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.630473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.720 [2024-05-15 13:46:55.630522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.720 [2024-05-15 13:46:55.630545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.721 [2024-05-15 13:46:55.630559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.630574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:42.721 [2024-05-15 13:46:55.630590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.630614] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2a20 is same with the state(5) to be set 00:28:42.721 [2024-05-15 13:46:55.630646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:42.721 [2024-05-15 13:46:55.630665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:42.721 [2024-05-15 13:46:55.630686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88512 len:8 PRP1 0x0 PRP2 0x0 00:28:42.721 [2024-05-15 13:46:55.630708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.630806] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12b2a20 was disconnected and freed. reset controller. 
00:28:42.721 [2024-05-15 13:46:55.631014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.721 [2024-05-15 13:46:55.631060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.631087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.721 [2024-05-15 13:46:55.631110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.631134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.721 [2024-05-15 13:46:55.631156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.631177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.721 [2024-05-15 13:46:55.631193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.721 [2024-05-15 13:46:55.631208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7910 is same with the state(5) to be set 00:28:42.721 [2024-05-15 13:46:55.631503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.721 [2024-05-15 13:46:55.631538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7910 (9): Bad file descriptor 00:28:42.721 [2024-05-15 13:46:55.631653] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.721 [2024-05-15 13:46:55.631725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.721 [2024-05-15 13:46:55.631771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.721 [2024-05-15 13:46:55.631790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b7910 with addr=10.0.0.2, port=4420 00:28:42.721 [2024-05-15 13:46:55.631811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7910 is same with the state(5) to be set 00:28:42.721 [2024-05-15 13:46:55.631834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7910 (9): Bad file descriptor 00:28:42.721 [2024-05-15 13:46:55.631856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:42.721 [2024-05-15 13:46:55.631872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:42.721 [2024-05-15 13:46:55.631888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:42.721 [2024-05-15 13:46:55.631913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
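The wall of ABORTED - SQ DELETION completions and the connect() errno = 111 (ECONNREFUSED) retries above are the expected fallout of pulling the listener out from under an active connection. A minimal sketch of the trigger and of the checks the test performs next, with repo paths shortened and commands as traced:

# Trigger: drop the listener while bdevperf I/O is in flight (host/timeout.sh@55 above)
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# While reconnects are still being retried every 2 s, the controller and its bdev
# remain visible on the initiator side (the trace below shows NVMe0 / NVMe0n1) ...
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
# ... and once the 5 s ctrlr-loss timeout expires, both queries come back empty,
# which is what the test checks a few seconds later.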
00:28:42.721 [2024-05-15 13:46:55.631928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:42.721 13:46:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:28:44.639 [2024-05-15 13:46:57.632092] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.639 [2024-05-15 13:46:57.632201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.639 [2024-05-15 13:46:57.632271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.639 [2024-05-15 13:46:57.632289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b7910 with addr=10.0.0.2, port=4420 00:28:44.639 [2024-05-15 13:46:57.632304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7910 is same with the state(5) to be set 00:28:44.639 [2024-05-15 13:46:57.632331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7910 (9): Bad file descriptor 00:28:44.639 [2024-05-15 13:46:57.632351] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:44.639 [2024-05-15 13:46:57.632362] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:44.639 [2024-05-15 13:46:57.632375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:44.639 [2024-05-15 13:46:57.632403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:44.639 [2024-05-15 13:46:57.632414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:44.639 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:28:44.639 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:44.639 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:44.896 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:28:44.896 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:28:44.896 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:44.896 13:46:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:45.155 13:46:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:28:45.155 13:46:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:28:46.537 [2024-05-15 13:46:59.632574] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.537 [2024-05-15 13:46:59.632936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.537 [2024-05-15 13:46:59.633104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.537 [2024-05-15 13:46:59.633202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b7910 with addr=10.0.0.2, port=4420 00:28:46.537 [2024-05-15 13:46:59.633362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7910 is same with the state(5) to be set 00:28:46.537 [2024-05-15 13:46:59.633490] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b7910 (9): Bad file 
descriptor 00:28:46.537 [2024-05-15 13:46:59.633562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.537 [2024-05-15 13:46:59.633674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.537 [2024-05-15 13:46:59.633738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.537 [2024-05-15 13:46:59.633805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:46.537 [2024-05-15 13:46:59.633942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.114 [2024-05-15 13:47:01.634065] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:49.681 00:28:49.681 Latency(us) 00:28:49.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.681 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:49.681 Verification LBA range: start 0x0 length 0x4000 00:28:49.681 NVMe0n1 : 8.23 1336.31 5.22 15.56 0.00 94497.18 3542.06 7030452.42 00:28:49.681 =================================================================================================================== 00:28:49.681 Total : 1336.31 5.22 15.56 0.00 94497.18 3542.06 7030452.42 00:28:49.681 0 00:28:50.246 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:28:50.246 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:50.246 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:28:50.505 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:28:50.505 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:28:50.505 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:28:50.505 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 97423 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97407 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 97407 ']' 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 97407 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97407 00:28:50.765 killing process with pid 97407 00:28:50.765 Received shutdown signal, test time was about 9.323472 seconds 00:28:50.765 00:28:50.765 Latency(us) 00:28:50.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.765 =================================================================================================================== 00:28:50.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # 
'[' reactor_2 = sudo ']' 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97407' 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 97407 00:28:50.765 13:47:03 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 97407 00:28:51.024 13:47:03 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:51.282 [2024-05-15 13:47:04.151889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97545 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97545 /var/tmp/bdevperf.sock 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 97545 ']' 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:51.282 13:47:04 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:51.282 [2024-05-15 13:47:04.224035] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:51.282 [2024-05-15 13:47:04.224416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97545 ] 00:28:51.282 [2024-05-15 13:47:04.352963] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
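The entries above show the follow-up run being set up: the listener is restored and a fresh bdevperf is started in RPC-driven mode (-z -r /var/tmp/bdevperf.sock) before any test RPCs are issued. A hedged sketch of that launch-and-wait step, using the flags recorded in the log; the socket wait loop is only a stand-in for the suite's waitforlisten helper.

    # Hedged sketch of the bdevperf relaunch above; binary path and flags are
    # copied verbatim from the log, the wait loop is illustrative only.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock

    "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!

    until [ -S "$sock" ]; do sleep 0.2; done   # stand-in for the suite's waitforlisten helper
    echo "bdevperf (pid $bdevperf_pid) is ready on $sock"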
00:28:51.282 [2024-05-15 13:47:04.364345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.540 [2024-05-15 13:47:04.418029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:52.107 13:47:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:52.107 13:47:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:52.107 13:47:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:52.366 13:47:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:28:52.623 NVMe0n1 00:28:52.880 13:47:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97563 00:28:52.880 13:47:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:52.880 13:47:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:28:52.880 Running I/O for 10 seconds... 00:28:53.812 13:47:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.072 [2024-05-15 13:47:06.947535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.072 [2024-05-15 13:47:06.947846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.947984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.072 [2024-05-15 13:47:06.948086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.948148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.072 [2024-05-15 13:47:06.948265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.948375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.072 [2024-05-15 13:47:06.948434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.948562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:28:54.072 [2024-05-15 13:47:06.948749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.948859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.948978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67784 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.949178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.949291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.949357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.949413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.949522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.949579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.949659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.949759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.949877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:67824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.950096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.950324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.950482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.950636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 13:47:06.950811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.950903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.072 [2024-05-15 
13:47:06.950993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.951204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.951376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.951529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:67224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.951680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:67232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.951870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.951962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:67240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.952178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.952432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.952612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.952721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:67272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.072 [2024-05-15 13:47:06.952886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.072 [2024-05-15 13:47:06.952981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.953180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:67296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.953371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.953481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.953629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.953813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.953881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.954001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.954129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.954280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.954378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.954470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.954554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.954608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.954679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:67904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.954856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.954978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.955080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.955182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:67920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.955290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.955413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.955506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.955612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.955703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.955792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.955901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.955995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.956102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.956221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.956324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.956388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.956559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.956650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.956717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.956807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.956873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.956963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.957029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.957119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.957185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:67352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.957282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.957397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:67360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.957616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:67368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.957711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.957821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.957988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.958149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.073 [2024-05-15 13:47:06.958275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.958385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.958463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.958530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.958603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 
[2024-05-15 13:47:06.958667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.958781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.958847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.958921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:68016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.959107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:68024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.959328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:68032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.959467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.959616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.959777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.959914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.960052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.960185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.960312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.960405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.960470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.960576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.073 [2024-05-15 13:47:06.960692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.073 [2024-05-15 13:47:06.960759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.960844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.960905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.960969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:67400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.961030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.961124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.961300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.961433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:67416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.961575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.961714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.961856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.962002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.962133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.962280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:67440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.962424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.962557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.962690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.962826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:67456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.962958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:67464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:67480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:67488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:67496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:67536 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:67568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:67576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:68104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:68112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:68120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:54.074 [2024-05-15 13:47:06.963788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:68152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.074 [2024-05-15 13:47:06.963857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:67600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.963975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.963989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:67624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.964000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.964013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:67632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.964024] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.964037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.074 [2024-05-15 13:47:06.964060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.074 [2024-05-15 13:47:06.964071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:67672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:67680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:67688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:67696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:67704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:68160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964273] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:68184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:68200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:54.075 [2024-05-15 13:47:06.964435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:67744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.075 [2024-05-15 13:47:06.964598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:54.075 [2024-05-15 13:47:06.964661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:54.075 [2024-05-15 13:47:06.964671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67768 len:8 PRP1 0x0 PRP2 0x0 00:28:54.075 [2024-05-15 13:47:06.964681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.075 [2024-05-15 13:47:06.964751] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x838b40 was disconnected and freed. reset controller. 
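The long run of ABORTED - SQ DELETION completions above is the expected fallout of removing the TCP listener under an active verify workload: the target tears down the queue pair, every queued READ/WRITE is completed manually as aborted, the qpair is freed, and bdev_nvme schedules a controller reset. A hedged sketch of the listener toggle that drives this state, using the rpc.py commands and address recorded in the log; the sleep is illustrative only.

    # Hedged sketch of the listener toggle behind the aborts above; subsystem
    # NQN, transport, address, and port are taken verbatim from the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # qpair drops, queued I/O aborted
    sleep 2                                                                   # reconnects fail with ECONNREFUSED meanwhile
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # next controller reset can succeed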
00:28:54.075 [2024-05-15 13:47:06.964831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:28:54.075 [2024-05-15 13:47:06.965047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.075 [2024-05-15 13:47:06.965158] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.075 [2024-05-15 13:47:06.965217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.075 [2024-05-15 13:47:06.965268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.075 [2024-05-15 13:47:06.965284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:28:54.075 [2024-05-15 13:47:06.965297] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:28:54.075 [2024-05-15 13:47:06.965316] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:28:54.075 [2024-05-15 13:47:06.965333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:54.075 [2024-05-15 13:47:06.965344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:54.075 [2024-05-15 13:47:06.965357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:54.075 [2024-05-15 13:47:06.965377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:54.075 [2024-05-15 13:47:06.965388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:54.075 13:47:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:28:55.009 [2024-05-15 13:47:07.965542] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.009 [2024-05-15 13:47:07.965908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.009 [2024-05-15 13:47:07.965992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:55.009 [2024-05-15 13:47:07.966102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:28:55.009 [2024-05-15 13:47:07.966166] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:28:55.009 [2024-05-15 13:47:07.966321] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:28:55.009 [2024-05-15 13:47:07.966505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:55.009 [2024-05-15 13:47:07.966560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:55.009 [2024-05-15 13:47:07.966664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:55.009 [2024-05-15 13:47:07.966717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
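How long the failing reset loop above is allowed to spin is bounded by the attach-time options recorded earlier in this log: reconnect once per second, fail fast I/O after 2 s without a connection, and give up on the controller after 5 s. A hedged restatement of that attach call for reference; all flags are copied verbatim from the run.

    # Hedged restatement of the attach command used for this run; every flag
    # below appears verbatim in the log.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock

    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1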
00:28:55.009 [2024-05-15 13:47:07.966752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:55.009 13:47:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:55.266 [2024-05-15 13:47:08.238324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:55.266 13:47:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 97563 00:28:56.199 [2024-05-15 13:47:08.981472] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:04.310 00:29:04.310 Latency(us) 00:29:04.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.310 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:29:04.310 Verification LBA range: start 0x0 length 0x4000 00:29:04.310 NVMe0n1 : 10.01 6651.41 25.98 0.00 0.00 19211.68 2028.50 3035877.18 00:29:04.310 =================================================================================================================== 00:29:04.310 Total : 6651.41 25.98 0.00 0.00 19211.68 2028.50 3035877.18 00:29:04.310 0 00:29:04.310 13:47:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97672 00:29:04.310 13:47:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:04.310 13:47:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:29:04.310 Running I/O for 10 seconds... 00:29:04.310 13:47:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.310 [2024-05-15 13:47:17.174910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.310 [2024-05-15 13:47:17.175223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.175356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.310 [2024-05-15 13:47:17.175478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.175538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.310 [2024-05-15 13:47:17.175637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.175692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:04.310 [2024-05-15 13:47:17.175810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.175907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:29:04.310 [2024-05-15 13:47:17.176125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81000 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.176275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.176430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.176540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.176637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.176756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.176844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.176897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.176950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.177038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.177190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.177378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.310 [2024-05-15 13:47:17.177513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.310 [2024-05-15 13:47:17.177648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.310 [2024-05-15 13:47:17.177755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.310 [2024-05-15 13:47:17.177860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.310 
[2024-05-15 13:47:17.177912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.178074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.178286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.178499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.178625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.178763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.178869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.178961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179489] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.179876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.179928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.180867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.180976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.181072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.181192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.311 [2024-05-15 13:47:17.181298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.181418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.181512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.181570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.181669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.181763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.181833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.181956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.182046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.182178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.182328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.182391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.182479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.182532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.182627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.311 [2024-05-15 13:47:17.182685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.311 [2024-05-15 13:47:17.182785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.182841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.182941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.183029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.183191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.183294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.183465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.183616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.183764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.183845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.183904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.183968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.184037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.184093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.184199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.184273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.184341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.184479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.184616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.184742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.184866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.185101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 
13:47:17.185198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.185388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.185494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.185558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.185737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.185872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.185969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.186025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.186081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.186156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.186214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.186316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.186474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.186604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.186758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.186886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.312 [2024-05-15 13:47:17.187017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.187142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.187247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.187393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.187507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.187610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.187711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.187775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.187872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.187934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.188121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.188271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.188425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.188587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.188746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.188850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.188950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.189046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.312 [2024-05-15 13:47:17.189182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.312 [2024-05-15 13:47:17.189301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.189416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.189512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.189604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.189702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.189815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.189915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81392 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 
[2024-05-15 13:47:17.190474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.313 [2024-05-15 13:47:17.190607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.313 [2024-05-15 13:47:17.190740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.313 [2024-05-15 13:47:17.190768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.314 [2024-05-15 13:47:17.190947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.190969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.190981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.190991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:04.314 [2024-05-15 13:47:17.191286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:04.314 [2024-05-15 13:47:17.191352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:04.314 [2024-05-15 13:47:17.191369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:29:04.314 [2024-05-15 13:47:17.191386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:04.314 [2024-05-15 13:47:17.191456] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x85cb10 was disconnected and freed. reset controller. 
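This is the same pattern as the first outage: the whole submission queue is drained with ABORTED - SQ DELETION before qpair 0x85cb10 is freed and the reset path starts. To quantify how many queued commands a run like this flushes, the abort completions can be counted from a saved copy of the console output (console.log is a placeholder name, not a file produced by the test):

    # Count aborted completions on the I/O qpair (qid:1) in a saved log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:1' console.log | wc -l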
00:29:04.314 [2024-05-15 13:47:17.191514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:29:04.314 [2024-05-15 13:47:17.191747] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.314 [2024-05-15 13:47:17.191853] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.314 [2024-05-15 13:47:17.191899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.314 [2024-05-15 13:47:17.191937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.314 [2024-05-15 13:47:17.191951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:29:04.314 [2024-05-15 13:47:17.191963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:29:04.314 [2024-05-15 13:47:17.191980] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:29:04.314 [2024-05-15 13:47:17.191996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.314 [2024-05-15 13:47:17.192007] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.314 [2024-05-15 13:47:17.192019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.314 [2024-05-15 13:47:17.192037] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.315 [2024-05-15 13:47:17.192048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.315 13:47:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:29:05.248 [2024-05-15 13:47:18.192185] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.248 [2024-05-15 13:47:18.192494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.248 [2024-05-15 13:47:18.192579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.248 [2024-05-15 13:47:18.192704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:29:05.248 [2024-05-15 13:47:18.192823] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:29:05.248 [2024-05-15 13:47:18.192900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:29:05.248 [2024-05-15 13:47:18.193066] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.248 [2024-05-15 13:47:18.193121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.248 [2024-05-15 13:47:18.193251] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.248 [2024-05-15 13:47:18.193310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.248 [2024-05-15 13:47:18.193347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.178 [2024-05-15 13:47:19.193622] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.179 [2024-05-15 13:47:19.193959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.179 [2024-05-15 13:47:19.194050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.179 [2024-05-15 13:47:19.194178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:29:06.179 [2024-05-15 13:47:19.194328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:29:06.179 [2024-05-15 13:47:19.194499] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:29:06.179 [2024-05-15 13:47:19.194571] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.179 [2024-05-15 13:47:19.194677] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.179 [2024-05-15 13:47:19.194734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.179 [2024-05-15 13:47:19.194833] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.179 [2024-05-15 13:47:19.194874] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.112 [2024-05-15 13:47:20.196007] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.112 [2024-05-15 13:47:20.196331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.112 [2024-05-15 13:47:20.196418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.112 [2024-05-15 13:47:20.196529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83d910 with addr=10.0.0.2, port=4420 00:29:07.112 [2024-05-15 13:47:20.196596] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83d910 is same with the state(5) to be set 00:29:07.112 [2024-05-15 13:47:20.196947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83d910 (9): Bad file descriptor 00:29:07.112 [2024-05-15 13:47:20.197303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.112 [2024-05-15 13:47:20.197430] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.112 [2024-05-15 13:47:20.197547] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.112 [2024-05-15 13:47:20.201121] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
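While the listener stays down, the connect/reset/fail sequence above repeats about once per second (13:47:18, 13:47:19, 13:47:20). If the retry cadence is of interest, the timestamps of the failed socket connects can be pulled out of a saved copy of the console output (again, console.log is a placeholder name):

    # Wall-clock time of each failed connect() attempt during the outage.
    grep -o '\[2024-05-15 [0-9:.]*\] uring.c: 648:uring_sock_create' console.log \
        | awk '{print substr($2, 1, length($2) - 1)}'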
00:29:07.112 [2024-05-15 13:47:20.201293] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.370 13:47:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:07.370 [2024-05-15 13:47:20.440654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:07.370 13:47:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 97672
00:29:08.301 [2024-05-15 13:47:21.238400] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:29:13.567
00:29:13.567 Latency(us)
00:29:13.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.567 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:13.567 Verification LBA range: start 0x0 length 0x4000
00:29:13.567 NVMe0n1 : 10.01 5648.53 22.06 4162.38 0.00 13025.81 589.04 3035877.18
00:29:13.567 ===================================================================================================================
00:29:13.567 Total : 5648.53 22.06 4162.38 0.00 13025.81 0.00 3035877.18
00:29:13.567 0
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97545
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 97545 ']'
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 97545
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97545
00:29:13.567 killing process with pid 97545 Received shutdown signal, test time was about 10.000000 seconds
00:29:13.567
00:29:13.567 Latency(us)
00:29:13.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:13.567 ===================================================================================================================
00:29:13.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97545'
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 97545
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 97545
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97782
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97782 /var/tmp/bdevperf.sock
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 97782 ']'
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:13.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:29:13.567 [2024-05-15 13:47:26.285775] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization...
00:29:13.567 [2024-05-15 13:47:26.286110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97782 ]
00:29:13.567 [2024-05-15 13:47:26.410781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:29:13.567 [2024-05-15 13:47:26.428179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:13.567 [2024-05-15 13:47:26.487083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97791
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:29:13.567 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97782 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:29:13.824 13:47:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:29:14.083 NVMe0n1
00:29:14.083 13:47:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97832
00:29:14.083 13:47:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:29:14.083 13:47:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:29:14.340 Running I/O for 10 seconds...
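Condensed, the xtrace output above sets up the second half of the timeout test: a fresh bdevperf is launched with -z on its own RPC socket so it can be configured before the run, the bdev_nvme retry options are adjusted (the -r -1 -e 9 call shown above), the nvmf_timeout.bt bpftrace probe is attached to the new PID, the controller is attached with a 5 s controller-loss timeout and a 2 s reconnect delay, and the random-read job is started through bdevperf.py. A minimal sketch of that sequence follows, reusing only commands visible in the log; the waitforlisten/xtrace helper plumbing is omitted and the backgrounding is simplified.

  #!/usr/bin/env bash
  # Sketch of the setup recorded above (paths as in the log; helper
  # plumbing such as waitforlisten and trap handling is left out).
  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf on its own RPC socket, waiting for configuration (-z).
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!

  # Tune bdev_nvme retry behaviour (same -r -1 -e 9 arguments as above)
  # and attach the BPF probe used to watch the timeout path.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1 -e 9
  "$spdk/scripts/bpftrace.sh" "$bdevperf_pid" "$spdk/scripts/bpf/nvmf_timeout.bt" &

  # Attach the target with a 5 s controller-loss timeout and a 2 s
  # reconnect delay, then kick off the random-read workload.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests

The next RPC recorded below, nvmf_subsystem_remove_listener, pulls the listener away while this workload is running, which is what triggers the flood of qpair recv-state messages and the per-command ABORTED - SQ DELETION completions that follow.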
00:29:15.274 13:47:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.274 [2024-05-15 13:47:28.362624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.362995] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363431] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.363970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364101] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364347] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.364881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.365948] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366297] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.366933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.367957] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368783] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.368978] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the 
state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369467] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.369918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370174] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370517] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.370822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with [2024-05-15 13:47:28.371101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371206] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 
is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371301] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371321] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371384] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371412] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.274 [2024-05-15 13:47:28.371455] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371473] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371534] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371605] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371628] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371640] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371652] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371751] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371821] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371844] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371890] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the 
state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371912] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371946] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.371992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372027] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372049] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.372097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13eff30 is same with the state(5) to be set 00:29:15.275 id:0 cdw10:00000000 cdw11:00000000 00:29:15.275 [2024-05-15 13:47:28.372281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.275 [2024-05-15 13:47:28.372360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.275 [2024-05-15 13:47:28.372473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.275 
[2024-05-15 13:47:28.372564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.275 [2024-05-15 13:47:28.372760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.275 [2024-05-15 13:47:28.372864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:15.275 [2024-05-15 13:47:28.372963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.275 [2024-05-15 13:47:28.373028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19567d0 is same with the state(5) to be set 00:29:15.275 [2024-05-15 13:47:28.373212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.275 [2024-05-15 13:47:28.373359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.373518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.373687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.373837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.373935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.374046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.374200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.374340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.374558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.374681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.374792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.374897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.374990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.375089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.375192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.375333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.375461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.375563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.375714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.375884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376203] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:121912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.538 [2024-05-15 13:47:28.376511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.538 [2024-05-15 13:47:28.376522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:15.539 [2024-05-15 13:47:28.376699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 
13:47:28.376931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.376978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.376989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377410] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.539 [2024-05-15 13:47:28.377479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.539 [2024-05-15 13:47:28.377490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 
lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.377986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.377997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 
13:47:28.378135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.540 [2024-05-15 13:47:28.378478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.540 [2024-05-15 13:47:28.378490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:15.541 [2024-05-15 13:47:28.378806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1951a00 is same with the state(5) to be set 00:29:15.541 [2024-05-15 13:47:28.378836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:15.541 [2024-05-15 13:47:28.378845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:15.541 [2024-05-15 13:47:28.378855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47320 len:8 PRP1 0x0 PRP2 0x0 00:29:15.541 [2024-05-15 13:47:28.378866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:15.541 [2024-05-15 13:47:28.378936] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1951a00 was disconnected and freed. reset controller. 00:29:15.541 [2024-05-15 13:47:28.379222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.541 [2024-05-15 13:47:28.379262] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19567d0 (9): Bad file descriptor 00:29:15.541 [2024-05-15 13:47:28.379388] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.541 [2024-05-15 13:47:28.379456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.541 [2024-05-15 13:47:28.379495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:15.541 [2024-05-15 13:47:28.379510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19567d0 with addr=10.0.0.2, port=4420 00:29:15.541 [2024-05-15 13:47:28.379522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19567d0 is same with the state(5) to be set 00:29:15.541 [2024-05-15 13:47:28.379541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19567d0 (9): Bad file descriptor 00:29:15.541 [2024-05-15 13:47:28.379565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:15.541 [2024-05-15 13:47:28.379576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:15.541 [2024-05-15 13:47:28.379588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:15.541 [2024-05-15 13:47:28.379611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:15.541 [2024-05-15 13:47:28.379621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:15.541 13:47:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 97832 00:29:17.475 [2024-05-15 13:47:30.379888] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.475 [2024-05-15 13:47:30.380000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.475 [2024-05-15 13:47:30.380048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.476 [2024-05-15 13:47:30.380067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19567d0 with addr=10.0.0.2, port=4420 00:29:17.476 [2024-05-15 13:47:30.380087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19567d0 is same with the state(5) to be set 00:29:17.476 [2024-05-15 13:47:30.380119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19567d0 (9): Bad file descriptor 00:29:17.476 [2024-05-15 13:47:30.380157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.476 [2024-05-15 13:47:30.380172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.476 [2024-05-15 13:47:30.380189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:17.476 [2024-05-15 13:47:30.380221] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.476 [2024-05-15 13:47:30.380609] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.375 [2024-05-15 13:47:32.380845] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.375 [2024-05-15 13:47:32.382682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.375 [2024-05-15 13:47:32.382860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.375 [2024-05-15 13:47:32.382914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19567d0 with addr=10.0.0.2, port=4420 00:29:19.375 [2024-05-15 13:47:32.383090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19567d0 is same with the state(5) to be set 00:29:19.375 [2024-05-15 13:47:32.383315] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19567d0 (9): Bad file descriptor 00:29:19.375 [2024-05-15 13:47:32.383493] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.375 [2024-05-15 13:47:32.383722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.376 [2024-05-15 13:47:32.383859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.376 [2024-05-15 13:47:32.383925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.376 [2024-05-15 13:47:32.383969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.963 [2024-05-15 13:47:34.384181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.544 00:29:22.544 Latency(us) 00:29:22.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.544 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:29:22.544 NVMe0n1 : 8.16 2133.56 8.33 15.69 0.00 59466.32 1560.38 7030452.42 00:29:22.544 =================================================================================================================== 00:29:22.544 Total : 2133.56 8.33 15.69 0.00 59466.32 1560.38 7030452.42 00:29:22.544 0 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:22.544 Attaching 5 probes... 
00:29:22.544 1249.572832: reset bdev controller NVMe0 00:29:22.544 1249.661715: reconnect bdev controller NVMe0 00:29:22.544 3250.083353: reconnect delay bdev controller NVMe0 00:29:22.544 3250.114150: reconnect bdev controller NVMe0 00:29:22.544 5251.060997: reconnect delay bdev controller NVMe0 00:29:22.544 5251.086583: reconnect bdev controller NVMe0 00:29:22.544 7254.503451: reconnect delay bdev controller NVMe0 00:29:22.544 7254.525514: reconnect bdev controller NVMe0 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 97791 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97782 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 97782 ']' 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 97782 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97782 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97782' 00:29:22.544 killing process with pid 97782 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 97782 00:29:22.544 Received shutdown signal, test time was about 8.216564 seconds 00:29:22.544 00:29:22.544 Latency(us) 00:29:22.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.544 =================================================================================================================== 00:29:22.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 97782 00:29:22.544 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:22.801 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:22.801 rmmod nvme_tcp 00:29:23.060 rmmod nvme_fabrics 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:29:23.060 13:47:35 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 97365 ']' 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 97365 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 97365 ']' 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 97365 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97365 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97365' 00:29:23.060 killing process with pid 97365 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 97365 00:29:23.060 [2024-05-15 13:47:35.967670] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:23.060 13:47:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 97365 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:23.318 ************************************ 00:29:23.318 END TEST nvmf_timeout 00:29:23.318 ************************************ 00:29:23.318 00:29:23.318 real 0m45.244s 00:29:23.318 user 2m11.680s 00:29:23.318 sys 0m6.615s 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:23.318 13:47:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:23.318 13:47:36 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:29:23.318 13:47:36 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:23.318 13:47:36 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.318 13:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.318 13:47:36 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:23.318 00:29:23.318 real 14m37.111s 00:29:23.318 user 38m2.571s 00:29:23.318 sys 4m40.431s 00:29:23.318 ************************************ 00:29:23.318 END TEST nvmf_tcp 00:29:23.318 ************************************ 00:29:23.318 13:47:36 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 
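The nvmf_timeout portion above closes out as intended: with the target listener gone, every host-side reconnect attempt fails with connect() errno 111 (connection refused), controller re-initialization fails, and bdev_nvme schedules the next reset roughly two seconds later. The tracing helper (the "Attaching 5 probes..." banner) records one "reconnect delay bdev controller NVMe0" event per retry, and the script judges the test by counting those events in trace.txt before tearing everything down. A minimal sketch of that final check, assuming the trace file path seen above and a required minimum of three delay events (the exact threshold expression in timeout.sh is only partially visible in this excerpt):

  trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # count the reconnect-delay events recorded by the tracer
  delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
  # at least three delays (one per ~2 s retry) means the back-off behaved as expected
  if (( delays < 3 )); then
      echo "expected >= 3 reconnect delays, got $delays" >&2
      exit 1
  fi
  rm -f "$trace"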
00:29:23.318 13:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:23.318 13:47:36 -- spdk/autotest.sh@284 -- # [[ 1 -eq 0 ]] 00:29:23.318 13:47:36 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:23.318 13:47:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:23.318 13:47:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:23.318 13:47:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.318 ************************************ 00:29:23.318 START TEST nvmf_dif 00:29:23.318 ************************************ 00:29:23.318 13:47:36 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:23.577 * Looking for test storage... 00:29:23.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:23.577 13:47:36 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.577 13:47:36 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.577 13:47:36 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.577 13:47:36 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.577 13:47:36 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.577 13:47:36 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.577 13:47:36 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:23.577 13:47:36 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:23.577 13:47:36 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.577 13:47:36 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:23.577 13:47:36 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:23.577 13:47:36 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:23.577 13:47:36 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:23.578 Cannot find device "nvmf_tgt_br" 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@155 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:23.578 Cannot find device "nvmf_tgt_br2" 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@156 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:23.578 Cannot find device "nvmf_tgt_br" 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@158 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:23.578 Cannot find device "nvmf_tgt_br2" 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@159 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:23.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@162 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:23.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@163 -- # true 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:23.578 13:47:36 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:23.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:29:23.836 00:29:23.836 --- 10.0.0.2 ping statistics --- 00:29:23.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.836 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:23.836 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:23.836 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:29:23.836 00:29:23.836 --- 10.0.0.3 ping statistics --- 00:29:23.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.836 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:23.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:23.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:29:23.836 00:29:23.836 --- 10.0.0.1 ping statistics --- 00:29:23.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.836 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:23.836 13:47:36 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:24.402 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:24.403 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:24.403 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:24.403 13:47:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:24.403 13:47:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=98269 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 98269 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 98269 ']' 00:29:24.403 13:47:37 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:24.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:24.403 13:47:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:24.403 [2024-05-15 13:47:37.391060] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:24.403 [2024-05-15 13:47:37.391169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.662 [2024-05-15 13:47:37.524914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
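Before any DIF traffic can flow, nvmf_veth_init above rebuilds the virtual test network from plain iproute2 commands: the namespace nvmf_tgt_ns_spdk receives the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end (10.0.0.1) stays in the root namespace, everything is joined by the nvmf_br bridge, port 4420 is opened in iptables, and three pings confirm reachability before nvmf_tgt is launched inside the namespace with "ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF". A condensed sketch of the same setup, trimmed to the single target interface these tests actually use (interface names and addresses exactly as in the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator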
00:29:24.662 [2024-05-15 13:47:37.545288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.662 [2024-05-15 13:47:37.601135] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.662 [2024-05-15 13:47:37.601201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.662 [2024-05-15 13:47:37.601225] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.662 [2024-05-15 13:47:37.601260] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.662 [2024-05-15 13:47:37.601275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.662 [2024-05-15 13:47:37.601323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:25.634 13:47:38 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:25.634 13:47:38 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.634 13:47:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:25.634 13:47:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:25.634 [2024-05-15 13:47:38.462294] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.634 13:47:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:25.634 13:47:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:25.635 ************************************ 00:29:25.635 START TEST fio_dif_1_default 00:29:25.635 ************************************ 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:25.635 bdev_null0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:25.635 [2024-05-15 13:47:38.506236] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:25.635 [2024-05-15 13:47:38.506537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:25.635 { 00:29:25.635 "params": { 00:29:25.635 "name": "Nvme$subsystem", 00:29:25.635 "trtype": "$TEST_TRANSPORT", 00:29:25.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.635 "adrfam": "ipv4", 00:29:25.635 "trsvcid": "$NVMF_PORT", 00:29:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.635 "hdgst": ${hdgst:-false}, 00:29:25.635 "ddgst": ${ddgst:-false} 00:29:25.635 }, 00:29:25.635 "method": "bdev_nvme_attach_controller" 00:29:25.635 } 00:29:25.635 EOF 00:29:25.635 )") 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:25.635 "params": { 00:29:25.635 "name": "Nvme0", 00:29:25.635 "trtype": "tcp", 00:29:25.635 "traddr": "10.0.0.2", 00:29:25.635 "adrfam": "ipv4", 00:29:25.635 "trsvcid": "4420", 00:29:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.635 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:25.635 "hdgst": false, 00:29:25.635 "ddgst": false 00:29:25.635 }, 00:29:25.635 "method": "bdev_nvme_attach_controller" 00:29:25.635 }' 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:25.635 13:47:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:25.635 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:25.635 fio-3.35 00:29:25.635 Starting 1 thread 00:29:37.869 00:29:37.869 filename0: (groupid=0, jobs=1): err= 0: pid=98336: Wed May 15 13:47:49 2024 00:29:37.869 
read: IOPS=9375, BW=36.6MiB/s (38.4MB/s)(366MiB/10001msec) 00:29:37.869 slat (usec): min=4, max=249, avg= 7.79, stdev= 1.91 00:29:37.869 clat (usec): min=350, max=6445, avg=404.83, stdev=49.86 00:29:37.869 lat (usec): min=357, max=6477, avg=412.62, stdev=50.08 00:29:37.869 clat percentiles (usec): 00:29:37.869 | 1.00th=[ 363], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:29:37.869 | 30.00th=[ 392], 40.00th=[ 396], 50.00th=[ 404], 60.00th=[ 408], 00:29:37.869 | 70.00th=[ 412], 80.00th=[ 420], 90.00th=[ 429], 95.00th=[ 441], 00:29:37.869 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 603], 00:29:37.869 | 99.99th=[ 1020] 00:29:37.869 bw ( KiB/s): min=35520, max=38784, per=100.00%, avg=37541.05, stdev=896.33, samples=19 00:29:37.869 iops : min= 8880, max= 9696, avg=9385.26, stdev=224.08, samples=19 00:29:37.869 lat (usec) : 500=98.60%, 750=1.38%, 1000=0.01% 00:29:37.869 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:29:37.869 cpu : usr=81.92%, sys=16.24%, ctx=52, majf=0, minf=0 00:29:37.869 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:37.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.869 issued rwts: total=93760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.869 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:37.869 00:29:37.869 Run status group 0 (all jobs): 00:29:37.869 READ: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=366MiB (384MB), run=10001-10001msec 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.869 00:29:37.869 real 0m10.920s 00:29:37.869 user 0m8.739s 00:29:37.869 sys 0m1.919s 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 ************************************ 00:29:37.869 END TEST fio_dif_1_default 00:29:37.869 ************************************ 00:29:37.869 13:47:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:37.869 13:47:49 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:37.869 13:47:49 nvmf_dif -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 ************************************ 00:29:37.869 START TEST fio_dif_1_multi_subsystems 00:29:37.869 ************************************ 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 bdev_null0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:37.869 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 [2024-05-15 13:47:49.478342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
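Each fio_dif case provisions its targets exactly as traced above for bdev_null0 and bdev_null1: a null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1 protection is created, wrapped in its own subsystem, and exposed on the 10.0.0.2:4420 TCP listener, while DIF insertion/stripping is delegated to the transport created earlier with --dif-insert-or-strip. A sketch of the equivalent rpc.py sequence for the first subsystem, using the arguments visible in the trace (the multi-subsystem case simply repeats it with bdev_null1 and cnode1):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1 (the NULL_* values set by dif.sh)
  $rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

In the test itself these calls are issued through the rpc_cmd helper against the nvmf_tgt instance started above.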
00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 bdev_null1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.870 { 00:29:37.870 "params": { 00:29:37.870 "name": "Nvme$subsystem", 00:29:37.870 "trtype": "$TEST_TRANSPORT", 00:29:37.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.870 "adrfam": "ipv4", 00:29:37.870 "trsvcid": "$NVMF_PORT", 00:29:37.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.870 "hdgst": ${hdgst:-false}, 00:29:37.870 "ddgst": ${ddgst:-false} 00:29:37.870 }, 00:29:37.870 "method": "bdev_nvme_attach_controller" 00:29:37.870 } 00:29:37.870 EOF 00:29:37.870 )") 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:37.870 { 00:29:37.870 "params": { 00:29:37.870 "name": "Nvme$subsystem", 00:29:37.870 "trtype": "$TEST_TRANSPORT", 00:29:37.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.870 "adrfam": "ipv4", 00:29:37.870 "trsvcid": "$NVMF_PORT", 00:29:37.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.870 "hdgst": ${hdgst:-false}, 00:29:37.870 "ddgst": ${ddgst:-false} 00:29:37.870 }, 00:29:37.870 "method": "bdev_nvme_attach_controller" 00:29:37.870 } 00:29:37.870 EOF 00:29:37.870 )") 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
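The jq step that follows merges the two per-controller fragments into the bdev configuration that fio reads from /dev/fd/62. Done by hand, and with the outer wrapper spelled out (the wrapper and the /tmp/bdev.json path are assumptions here; the log only echoes the per-controller entries), the equivalent would look roughly like this:

# Hand-rolled stand-in for the JSON that gen_nvmf_target_json pipes to fio;
# the two-subsystem run below appends a second entry for Nvme1 / cnode1 / host1.
# /tmp/bdev.json is just a placeholder name.
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON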
00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:37.870 "params": { 00:29:37.870 "name": "Nvme0", 00:29:37.870 "trtype": "tcp", 00:29:37.870 "traddr": "10.0.0.2", 00:29:37.870 "adrfam": "ipv4", 00:29:37.870 "trsvcid": "4420", 00:29:37.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.870 "hdgst": false, 00:29:37.870 "ddgst": false 00:29:37.870 }, 00:29:37.870 "method": "bdev_nvme_attach_controller" 00:29:37.870 },{ 00:29:37.870 "params": { 00:29:37.870 "name": "Nvme1", 00:29:37.870 "trtype": "tcp", 00:29:37.870 "traddr": "10.0.0.2", 00:29:37.870 "adrfam": "ipv4", 00:29:37.870 "trsvcid": "4420", 00:29:37.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:37.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:37.870 "hdgst": false, 00:29:37.870 "ddgst": false 00:29:37.870 }, 00:29:37.870 "method": "bdev_nvme_attach_controller" 00:29:37.870 }' 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:37.870 13:47:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:37.870 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:37.871 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:37.871 fio-3.35 00:29:37.871 Starting 2 threads 00:29:47.867 00:29:47.867 filename0: (groupid=0, jobs=1): err= 0: pid=98495: Wed May 15 13:48:00 2024 00:29:47.867 read: IOPS=5025, BW=19.6MiB/s (20.6MB/s)(196MiB/10001msec) 00:29:47.867 slat (usec): min=5, max=109, avg=14.18, stdev= 4.44 00:29:47.867 clat (usec): min=381, max=5887, avg=755.76, stdev=78.86 00:29:47.867 lat (usec): min=389, max=5906, avg=769.94, stdev=79.68 00:29:47.867 clat percentiles (usec): 00:29:47.867 | 1.00th=[ 424], 5.00th=[ 701], 10.00th=[ 709], 20.00th=[ 725], 00:29:47.867 | 30.00th=[ 734], 40.00th=[ 742], 50.00th=[ 750], 60.00th=[ 758], 00:29:47.867 | 70.00th=[ 775], 80.00th=[ 783], 90.00th=[ 799], 95.00th=[ 848], 00:29:47.867 | 99.00th=[ 955], 99.50th=[ 988], 99.90th=[ 1057], 99.95th=[ 1074], 00:29:47.867 | 99.99th=[ 1123] 00:29:47.867 bw ( KiB/s): min=18912, max=22944, per=50.43%, avg=20117.89, stdev=771.11, samples=19 00:29:47.867 iops : min= 4728, max= 5736, 
avg=5029.47, stdev=192.78, samples=19 00:29:47.867 lat (usec) : 500=1.58%, 750=44.26%, 1000=53.77% 00:29:47.867 lat (msec) : 2=0.38%, 10=0.01% 00:29:47.867 cpu : usr=88.08%, sys=10.53%, ctx=182, majf=0, minf=9 00:29:47.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.867 issued rwts: total=50264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.867 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:47.867 filename1: (groupid=0, jobs=1): err= 0: pid=98496: Wed May 15 13:48:00 2024 00:29:47.867 read: IOPS=4945, BW=19.3MiB/s (20.3MB/s)(193MiB/10001msec) 00:29:47.867 slat (nsec): min=4498, max=68983, avg=15283.35, stdev=5822.48 00:29:47.867 clat (usec): min=498, max=6022, avg=765.83, stdev=82.31 00:29:47.867 lat (usec): min=508, max=6051, avg=781.12, stdev=85.07 00:29:47.867 clat percentiles (usec): 00:29:47.867 | 1.00th=[ 652], 5.00th=[ 685], 10.00th=[ 701], 20.00th=[ 725], 00:29:47.867 | 30.00th=[ 742], 40.00th=[ 750], 50.00th=[ 758], 60.00th=[ 766], 00:29:47.867 | 70.00th=[ 775], 80.00th=[ 791], 90.00th=[ 816], 95.00th=[ 881], 00:29:47.867 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1205], 00:29:47.867 | 99.99th=[ 1319] 00:29:47.867 bw ( KiB/s): min=17792, max=20512, per=49.59%, avg=19781.05, stdev=613.89, samples=19 00:29:47.867 iops : min= 4448, max= 5128, avg=4945.26, stdev=153.47, samples=19 00:29:47.867 lat (usec) : 500=0.01%, 750=41.69%, 1000=56.44% 00:29:47.867 lat (msec) : 2=1.86%, 10=0.01% 00:29:47.867 cpu : usr=88.21%, sys=10.46%, ctx=12, majf=0, minf=0 00:29:47.867 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:47.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.867 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.867 issued rwts: total=49464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.867 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:47.867 00:29:47.867 Run status group 0 (all jobs): 00:29:47.867 READ: bw=39.0MiB/s (40.8MB/s), 19.3MiB/s-19.6MiB/s (20.3MB/s-20.6MB/s), io=390MiB (408MB), run=10001-10001msec 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.867 00:29:47.867 real 0m11.062s 00:29:47.867 user 0m18.286s 00:29:47.867 sys 0m2.391s 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:47.867 ************************************ 00:29:47.867 END TEST fio_dif_1_multi_subsystems 00:29:47.867 ************************************ 00:29:47.867 13:48:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:47.867 13:48:00 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:47.867 13:48:00 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 ************************************ 00:29:47.867 START TEST fio_dif_rand_params 00:29:47.867 ************************************ 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 bdev_null0 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:47.867 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:47.868 [2024-05-15 13:48:00.586976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:47.868 { 00:29:47.868 "params": { 00:29:47.868 "name": "Nvme$subsystem", 00:29:47.868 "trtype": "$TEST_TRANSPORT", 00:29:47.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:47.868 "adrfam": "ipv4", 00:29:47.868 
"trsvcid": "$NVMF_PORT", 00:29:47.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:47.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:47.868 "hdgst": ${hdgst:-false}, 00:29:47.868 "ddgst": ${ddgst:-false} 00:29:47.868 }, 00:29:47.868 "method": "bdev_nvme_attach_controller" 00:29:47.868 } 00:29:47.868 EOF 00:29:47.868 )") 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:47.868 "params": { 00:29:47.868 "name": "Nvme0", 00:29:47.868 "trtype": "tcp", 00:29:47.868 "traddr": "10.0.0.2", 00:29:47.868 "adrfam": "ipv4", 00:29:47.868 "trsvcid": "4420", 00:29:47.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:47.868 "hdgst": false, 00:29:47.868 "ddgst": false 00:29:47.868 }, 00:29:47.868 "method": "bdev_nvme_attach_controller" 00:29:47.868 }' 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:47.868 13:48:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:47.868 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:47.868 ... 
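The job file fio receives on /dev/fd/61 is produced by gen_fio_conf and never echoed, so the sketch below is a hand-written stand-in for this 128 KiB, iodepth=3, 3-job randread pass (filename selects the bdev by name; Nvme0n1 assumes SPDK's usual namespace naming for the controller Nvme0, and the JSON config is the single-controller one sketched earlier):

# Approximate job file for the run that starts below; the exact options emitted
# by gen_fio_conf may differ, and dif.fio / /tmp/bdev.json are placeholder names.
cat > dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json dif.fio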
00:29:47.868 fio-3.35 00:29:47.868 Starting 3 threads 00:29:54.431 00:29:54.431 filename0: (groupid=0, jobs=1): err= 0: pid=98652: Wed May 15 13:48:06 2024 00:29:54.431 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5003msec) 00:29:54.431 slat (nsec): min=7406, max=58935, avg=22606.83, stdev=9639.49 00:29:54.431 clat (usec): min=11019, max=14175, avg=11467.97, stdev=451.68 00:29:54.431 lat (usec): min=11029, max=14206, avg=11490.58, stdev=452.38 00:29:54.431 clat percentiles (usec): 00:29:54.431 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11207], 00:29:54.431 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11338], 00:29:54.431 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[12780], 00:29:54.431 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14222], 99.95th=[14222], 00:29:54.431 | 99.99th=[14222] 00:29:54.431 bw ( KiB/s): min=32256, max=33792, per=33.30%, avg=33254.40, stdev=518.36, samples=10 00:29:54.431 iops : min= 252, max= 264, avg=259.80, stdev= 4.05, samples=10 00:29:54.431 lat (msec) : 20=100.00% 00:29:54.431 cpu : usr=87.25%, sys=11.56%, ctx=10, majf=0, minf=9 00:29:54.431 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:54.431 filename0: (groupid=0, jobs=1): err= 0: pid=98653: Wed May 15 13:48:06 2024 00:29:54.431 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5003msec) 00:29:54.431 slat (nsec): min=4652, max=56044, avg=23169.09, stdev=10893.44 00:29:54.431 clat (usec): min=10940, max=14175, avg=11467.01, stdev=455.79 00:29:54.431 lat (usec): min=10955, max=14207, avg=11490.18, stdev=455.37 00:29:54.431 clat percentiles (usec): 00:29:54.431 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11207], 00:29:54.431 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11338], 60.00th=[11338], 00:29:54.431 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[12780], 00:29:54.431 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14222], 99.95th=[14222], 00:29:54.431 | 99.99th=[14222] 00:29:54.431 bw ( KiB/s): min=32256, max=33792, per=33.32%, avg=33280.00, stdev=543.06, samples=9 00:29:54.431 iops : min= 252, max= 264, avg=260.00, stdev= 4.24, samples=9 00:29:54.431 lat (msec) : 20=100.00% 00:29:54.431 cpu : usr=88.78%, sys=10.20%, ctx=40, majf=0, minf=9 00:29:54.431 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:54.431 filename0: (groupid=0, jobs=1): err= 0: pid=98654: Wed May 15 13:48:06 2024 00:29:54.431 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(163MiB/5006msec) 00:29:54.431 slat (nsec): min=4980, max=59580, avg=22099.19, stdev=9485.78 00:29:54.431 clat (usec): min=11003, max=15459, avg=11476.21, stdev=490.82 00:29:54.431 lat (usec): min=11020, max=15493, avg=11498.31, stdev=491.21 00:29:54.431 clat percentiles (usec): 00:29:54.431 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11207], 00:29:54.431 | 30.00th=[11338], 40.00th=[11338], 
50.00th=[11338], 60.00th=[11338], 00:29:54.431 | 70.00th=[11469], 80.00th=[11469], 90.00th=[11731], 95.00th=[12780], 00:29:54.431 | 99.00th=[13435], 99.50th=[13566], 99.90th=[15401], 99.95th=[15401], 00:29:54.431 | 99.99th=[15401] 00:29:54.431 bw ( KiB/s): min=32256, max=33792, per=33.30%, avg=33254.40, stdev=518.36, samples=10 00:29:54.431 iops : min= 252, max= 264, avg=259.80, stdev= 4.05, samples=10 00:29:54.431 lat (msec) : 20=100.00% 00:29:54.431 cpu : usr=86.01%, sys=12.49%, ctx=55, majf=0, minf=0 00:29:54.431 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:54.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:54.431 issued rwts: total=1302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:54.431 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:54.431 00:29:54.431 Run status group 0 (all jobs): 00:29:54.431 READ: bw=97.5MiB/s (102MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=488MiB (512MB), run=5003-5006msec 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:54.431 13:48:06 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 bdev_null0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 [2024-05-15 13:48:06.546778] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 bdev_null1 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.431 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 bdev_null2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:54.432 { 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme$subsystem", 00:29:54.432 "trtype": "$TEST_TRANSPORT", 00:29:54.432 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "$NVMF_PORT", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.432 "hdgst": ${hdgst:-false}, 00:29:54.432 "ddgst": ${ddgst:-false} 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 } 00:29:54.432 EOF 00:29:54.432 )") 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:54.432 { 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme$subsystem", 00:29:54.432 "trtype": "$TEST_TRANSPORT", 00:29:54.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "$NVMF_PORT", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.432 "hdgst": ${hdgst:-false}, 00:29:54.432 "ddgst": ${ddgst:-false} 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 } 00:29:54.432 EOF 00:29:54.432 )") 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:54.432 13:48:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:54.432 { 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme$subsystem", 00:29:54.432 "trtype": "$TEST_TRANSPORT", 00:29:54.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "$NVMF_PORT", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.432 "hdgst": ${hdgst:-false}, 00:29:54.432 "ddgst": ${ddgst:-false} 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 } 00:29:54.432 EOF 00:29:54.432 )") 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme0", 00:29:54.432 "trtype": "tcp", 00:29:54.432 "traddr": "10.0.0.2", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "4420", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:54.432 "hdgst": false, 00:29:54.432 "ddgst": false 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 },{ 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme1", 00:29:54.432 "trtype": "tcp", 00:29:54.432 "traddr": "10.0.0.2", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "4420", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.432 "hdgst": false, 00:29:54.432 "ddgst": false 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 },{ 00:29:54.432 "params": { 00:29:54.432 "name": "Nvme2", 00:29:54.432 "trtype": "tcp", 00:29:54.432 "traddr": "10.0.0.2", 00:29:54.432 "adrfam": "ipv4", 00:29:54.432 "trsvcid": "4420", 00:29:54.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:54.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:54.432 "hdgst": false, 00:29:54.432 "ddgst": false 00:29:54.432 }, 00:29:54.432 "method": "bdev_nvme_attach_controller" 00:29:54.432 }' 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:54.432 13:48:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:54.432 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.432 ... 00:29:54.432 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.432 ... 00:29:54.432 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:54.432 ... 00:29:54.432 fio-3.35 00:29:54.432 Starting 24 threads 00:30:06.653 00:30:06.653 filename0: (groupid=0, jobs=1): err= 0: pid=98745: Wed May 15 13:48:17 2024 00:30:06.653 read: IOPS=183, BW=734KiB/s (752kB/s)(7388KiB/10059msec) 00:30:06.653 slat (usec): min=4, max=18023, avg=26.41, stdev=419.12 00:30:06.653 clat (usec): min=514, max=263165, avg=86834.96, stdev=60739.03 00:30:06.653 lat (usec): min=524, max=263176, avg=86861.38, stdev=60736.39 00:30:06.653 clat percentiles (usec): 00:30:06.653 | 1.00th=[ 1614], 5.00th=[ 1893], 10.00th=[ 4424], 20.00th=[ 53740], 00:30:06.653 | 30.00th=[ 54264], 40.00th=[ 60556], 50.00th=[ 76022], 60.00th=[ 81265], 00:30:06.653 | 70.00th=[ 99091], 80.00th=[128451], 90.00th=[193987], 95.00th=[217056], 00:30:06.653 | 99.00th=[248513], 99.50th=[256902], 99.90th=[263193], 99.95th=[263193], 00:30:06.653 | 99.99th=[263193] 00:30:06.653 bw ( KiB/s): min= 256, max= 2368, per=4.90%, avg=734.80, stdev=480.39, samples=20 00:30:06.653 iops : min= 64, max= 592, avg=183.70, stdev=120.10, samples=20 00:30:06.653 lat (usec) : 750=0.22% 00:30:06.653 lat (msec) : 2=5.09%, 4=3.14%, 10=3.68%, 20=0.76%, 50=4.76% 00:30:06.653 lat (msec) : 100=53.33%, 250=28.05%, 500=0.97% 00:30:06.653 cpu : usr=39.61%, sys=3.21%, ctx=651, majf=0, minf=0 00:30:06.653 IO depths : 1=0.5%, 2=1.7%, 4=5.5%, 8=77.0%, 16=15.2%, 32=0.0%, >=64=0.0% 00:30:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=88.9%, 8=9.6%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98746: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=160, BW=642KiB/s (657kB/s)(6428KiB/10018msec) 00:30:06.654 slat (nsec): min=4335, max=44900, avg=14647.51, stdev=5625.62 00:30:06.654 clat (msec): min=18, max=533, avg=99.61, stdev=71.84 00:30:06.654 lat (msec): min=18, max=533, avg=99.63, stdev=71.84 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 25], 5.00th=[ 41], 10.00th=[ 53], 20.00th=[ 54], 00:30:06.654 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 77], 60.00th=[ 82], 00:30:06.654 | 70.00th=[ 90], 80.00th=[ 129], 90.00th=[ 215], 95.00th=[ 243], 00:30:06.654 | 99.00th=[ 296], 99.50th=[ 451], 99.90th=[ 535], 99.95th=[ 535], 00:30:06.654 | 99.99th=[ 535] 00:30:06.654 bw ( KiB/s): min= 112, max= 1088, per=4.19%, avg=628.21, stdev=351.62, samples=19 00:30:06.654 iops : min= 28, max= 272, avg=157.05, stdev=87.90, samples=19 00:30:06.654 lat (msec) : 20=0.56%, 50=7.72%, 100=62.79%, 250=25.08%, 500=3.73% 00:30:06.654 lat (msec) : 750=0.12% 00:30:06.654 cpu : usr=30.36%, sys=2.77%, ctx=401, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=0.7%, 4=3.4%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=88.4%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98747: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=151, BW=605KiB/s (619kB/s)(6072KiB/10040msec) 00:30:06.654 slat (nsec): min=7086, max=52034, avg=16243.52, stdev=7996.79 00:30:06.654 clat (msec): min=35, max=365, avg=105.61, stdev=65.37 00:30:06.654 lat (msec): min=35, max=365, avg=105.63, stdev=65.37 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 58], 00:30:06.654 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:30:06.654 | 70.00th=[ 104], 80.00th=[ 184], 90.00th=[ 226], 95.00th=[ 234], 00:30:06.654 | 99.00th=[ 279], 99.50th=[ 279], 99.90th=[ 368], 99.95th=[ 368], 00:30:06.654 | 99.99th=[ 368] 00:30:06.654 bw ( KiB/s): min= 256, max= 1040, per=4.00%, avg=600.80, stdev=312.34, samples=20 00:30:06.654 iops : min= 64, max= 260, avg=150.20, stdev=78.08, samples=20 00:30:06.654 lat (msec) : 50=8.10%, 100=57.64%, 250=32.02%, 500=2.24% 00:30:06.654 cpu : usr=43.03%, sys=3.72%, ctx=803, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=1.9%, 4=7.8%, 8=74.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=90.0%, 8=8.1%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98748: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=156, BW=625KiB/s (640kB/s)(6252KiB/10009msec) 00:30:06.654 slat (usec): min=4, max=18029, avg=32.80, stdev=522.31 00:30:06.654 clat (msec): min=14, max=531, avg=102.33, stdev=71.40 00:30:06.654 lat (msec): min=17, max=531, avg=102.37, stdev=71.38 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 51], 20.00th=[ 54], 00:30:06.654 | 30.00th=[ 57], 40.00th=[ 71], 50.00th=[ 80], 60.00th=[ 82], 00:30:06.654 | 70.00th=[ 102], 80.00th=[ 163], 90.00th=[ 218], 95.00th=[ 243], 00:30:06.654 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 531], 99.95th=[ 531], 00:30:06.654 | 99.99th=[ 531] 00:30:06.654 bw ( KiB/s): min= 112, max= 1040, per=4.06%, avg=609.79, stdev=353.32, samples=19 00:30:06.654 iops : min= 28, max= 260, avg=152.42, stdev=88.30, samples=19 00:30:06.654 lat (msec) : 20=0.45%, 50=9.21%, 100=59.63%, 250=29.43%, 500=1.15% 00:30:06.654 lat (msec) : 750=0.13% 00:30:06.654 cpu : usr=30.31%, sys=2.83%, ctx=398, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=75.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=89.7%, 8=8.6%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98749: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=154, BW=618KiB/s (633kB/s)(6188KiB/10011msec) 00:30:06.654 slat (usec): min=4, max=460, avg=15.88, stdev=14.65 00:30:06.654 clat (msec): min=17, max=528, avg=103.44, stdev=72.13 00:30:06.654 lat (msec): min=17, max=528, 
avg=103.46, stdev=72.13 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 54], 00:30:06.654 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 77], 60.00th=[ 84], 00:30:06.654 | 70.00th=[ 104], 80.00th=[ 174], 90.00th=[ 224], 95.00th=[ 236], 00:30:06.654 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 531], 99.95th=[ 531], 00:30:06.654 | 99.99th=[ 531] 00:30:06.654 bw ( KiB/s): min= 112, max= 1144, per=4.02%, avg=602.79, stdev=355.93, samples=19 00:30:06.654 iops : min= 28, max= 286, avg=150.68, stdev=88.96, samples=19 00:30:06.654 lat (msec) : 20=0.84%, 50=11.96%, 100=56.50%, 250=28.38%, 500=2.20% 00:30:06.654 lat (msec) : 750=0.13% 00:30:06.654 cpu : usr=33.99%, sys=3.27%, ctx=493, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=2.0%, 4=8.1%, 8=74.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=89.9%, 8=8.3%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98750: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=155, BW=624KiB/s (639kB/s)(6244KiB/10009msec) 00:30:06.654 slat (usec): min=4, max=19029, avg=38.59, stdev=662.73 00:30:06.654 clat (msec): min=17, max=531, avg=102.44, stdev=71.57 00:30:06.654 lat (msec): min=17, max=531, avg=102.48, stdev=71.58 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 52], 20.00th=[ 54], 00:30:06.654 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 78], 60.00th=[ 82], 00:30:06.654 | 70.00th=[ 107], 80.00th=[ 163], 90.00th=[ 215], 95.00th=[ 241], 00:30:06.654 | 99.00th=[ 426], 99.50th=[ 426], 99.90th=[ 531], 99.95th=[ 531], 00:30:06.654 | 99.99th=[ 531] 00:30:06.654 bw ( KiB/s): min= 112, max= 1088, per=4.04%, avg=606.79, stdev=356.27, samples=19 00:30:06.654 iops : min= 28, max= 272, avg=151.68, stdev=89.05, samples=19 00:30:06.654 lat (msec) : 20=0.90%, 50=8.46%, 100=58.68%, 250=30.69%, 500=1.15% 00:30:06.654 lat (msec) : 750=0.13% 00:30:06.654 cpu : usr=30.48%, sys=3.00%, ctx=402, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=1.8%, 4=7.6%, 8=75.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=89.7%, 8=8.4%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98751: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=146, BW=588KiB/s (602kB/s)(5904KiB/10043msec) 00:30:06.654 slat (usec): min=8, max=173, avg=14.46, stdev= 7.08 00:30:06.654 clat (msec): min=28, max=365, avg=108.50, stdev=66.92 00:30:06.654 lat (msec): min=28, max=365, avg=108.52, stdev=66.92 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 32], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 56], 00:30:06.654 | 30.00th=[ 68], 40.00th=[ 78], 50.00th=[ 82], 60.00th=[ 86], 00:30:06.654 | 70.00th=[ 108], 80.00th=[ 190], 90.00th=[ 220], 95.00th=[ 241], 00:30:06.654 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 368], 99.95th=[ 368], 00:30:06.654 | 99.99th=[ 368] 00:30:06.654 bw ( KiB/s): min= 256, max= 1008, per=3.90%, avg=584.00, stdev=298.67, samples=20 00:30:06.654 iops : min= 64, max= 252, avg=146.00, stdev=74.67, samples=20 
00:30:06.654 lat (msec) : 50=4.81%, 100=59.82%, 250=32.93%, 500=2.44% 00:30:06.654 cpu : usr=30.37%, sys=2.58%, ctx=395, majf=0, minf=9 00:30:06.654 IO depths : 1=0.3%, 2=3.0%, 4=11.9%, 8=69.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=91.1%, 8=6.2%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename0: (groupid=0, jobs=1): err= 0: pid=98752: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=154, BW=618KiB/s (633kB/s)(6188KiB/10017msec) 00:30:06.654 slat (usec): min=4, max=19032, avg=30.03, stdev=483.53 00:30:06.654 clat (msec): min=22, max=436, avg=103.32, stdev=70.69 00:30:06.654 lat (msec): min=22, max=436, avg=103.35, stdev=70.68 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 30], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:30:06.654 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 75], 60.00th=[ 82], 00:30:06.654 | 70.00th=[ 100], 80.00th=[ 169], 90.00th=[ 226], 95.00th=[ 243], 00:30:06.654 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 435], 99.95th=[ 435], 00:30:06.654 | 99.99th=[ 435] 00:30:06.654 bw ( KiB/s): min= 128, max= 1128, per=4.07%, avg=610.95, stdev=357.73, samples=19 00:30:06.654 iops : min= 32, max= 282, avg=152.74, stdev=89.43, samples=19 00:30:06.654 lat (msec) : 50=12.41%, 100=58.11%, 250=26.37%, 500=3.10% 00:30:06.654 cpu : usr=39.56%, sys=3.12%, ctx=472, majf=0, minf=9 00:30:06.654 IO depths : 1=0.1%, 2=1.8%, 4=7.2%, 8=75.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 complete : 0=0.0%, 4=89.7%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.654 issued rwts: total=1547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.654 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.654 filename1: (groupid=0, jobs=1): err= 0: pid=98753: Wed May 15 13:48:17 2024 00:30:06.654 read: IOPS=161, BW=647KiB/s (662kB/s)(6476KiB/10012msec) 00:30:06.654 slat (nsec): min=7651, max=53641, avg=16113.37, stdev=6817.46 00:30:06.654 clat (msec): min=16, max=379, avg=98.87, stdev=63.37 00:30:06.654 lat (msec): min=16, max=379, avg=98.89, stdev=63.37 00:30:06.654 clat percentiles (msec): 00:30:06.654 | 1.00th=[ 26], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 58], 00:30:06.654 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 83], 00:30:06.654 | 70.00th=[ 100], 80.00th=[ 133], 90.00th=[ 211], 95.00th=[ 243], 00:30:06.654 | 99.00th=[ 288], 99.50th=[ 321], 99.90th=[ 380], 99.95th=[ 380], 00:30:06.655 | 99.99th=[ 380] 00:30:06.655 bw ( KiB/s): min= 128, max= 1064, per=4.22%, avg=633.11, stdev=328.87, samples=19 00:30:06.655 iops : min= 32, max= 266, avg=158.26, stdev=82.20, samples=19 00:30:06.655 lat (msec) : 20=0.99%, 50=13.03%, 100=56.95%, 250=25.94%, 500=3.09% 00:30:06.655 cpu : usr=37.03%, sys=3.04%, ctx=582, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=0.2%, 4=1.1%, 8=81.7%, 16=16.9%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=88.1%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98754: Wed May 15 13:48:17 2024 00:30:06.655 read: 
IOPS=155, BW=624KiB/s (639kB/s)(6268KiB/10045msec) 00:30:06.655 slat (nsec): min=7976, max=53528, avg=17405.31, stdev=7467.66 00:30:06.655 clat (msec): min=20, max=330, avg=102.30, stdev=68.08 00:30:06.655 lat (msec): min=20, max=330, avg=102.31, stdev=68.08 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 32], 5.00th=[ 42], 10.00th=[ 50], 20.00th=[ 54], 00:30:06.655 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 82], 00:30:06.655 | 70.00th=[ 96], 80.00th=[ 167], 90.00th=[ 220], 95.00th=[ 247], 00:30:06.655 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 330], 99.95th=[ 330], 00:30:06.655 | 99.99th=[ 330] 00:30:06.655 bw ( KiB/s): min= 128, max= 1120, per=4.16%, avg=623.20, stdev=350.36, samples=20 00:30:06.655 iops : min= 32, max= 280, avg=155.80, stdev=87.59, samples=20 00:30:06.655 lat (msec) : 50=11.74%, 100=59.16%, 250=25.02%, 500=4.08% 00:30:06.655 cpu : usr=36.94%, sys=3.03%, ctx=447, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=1.4%, 4=5.6%, 8=76.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=89.2%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98755: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=150, BW=604KiB/s (618kB/s)(6056KiB/10028msec) 00:30:06.655 slat (usec): min=7, max=21030, avg=29.53, stdev=540.15 00:30:06.655 clat (msec): min=26, max=408, avg=105.66, stdev=71.58 00:30:06.655 lat (msec): min=26, max=409, avg=105.69, stdev=71.58 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 30], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 55], 00:30:06.655 | 30.00th=[ 59], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:30:06.655 | 70.00th=[ 105], 80.00th=[ 176], 90.00th=[ 224], 95.00th=[ 262], 00:30:06.655 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 409], 99.95th=[ 409], 00:30:06.655 | 99.99th=[ 409] 00:30:06.655 bw ( KiB/s): min= 128, max= 1064, per=4.00%, avg=599.35, stdev=333.74, samples=20 00:30:06.655 iops : min= 32, max= 266, avg=149.75, stdev=83.42, samples=20 00:30:06.655 lat (msec) : 50=6.14%, 100=61.76%, 250=26.95%, 500=5.15% 00:30:06.655 cpu : usr=32.53%, sys=2.96%, ctx=500, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=2.0%, 4=8.6%, 8=73.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=90.1%, 8=7.8%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98756: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=163, BW=654KiB/s (670kB/s)(6584KiB/10070msec) 00:30:06.655 slat (nsec): min=4127, max=56747, avg=14546.69, stdev=6082.05 00:30:06.655 clat (usec): min=1819, max=304497, avg=97770.75, stdev=65204.83 00:30:06.655 lat (usec): min=1828, max=304553, avg=97785.29, stdev=65204.82 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 5], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 55], 00:30:06.655 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 83], 00:30:06.655 | 70.00th=[ 100], 80.00th=[ 128], 90.00th=[ 228], 95.00th=[ 247], 00:30:06.655 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 305], 00:30:06.655 | 99.99th=[ 305] 
00:30:06.655 bw ( KiB/s): min= 255, max= 1064, per=4.35%, avg=651.95, stdev=311.08, samples=20 00:30:06.655 iops : min= 63, max= 266, avg=162.95, stdev=77.82, samples=20 00:30:06.655 lat (msec) : 2=0.30%, 4=0.67%, 10=1.94%, 50=10.87%, 100=56.99% 00:30:06.655 lat (msec) : 250=25.15%, 500=4.07% 00:30:06.655 cpu : usr=33.85%, sys=3.13%, ctx=480, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=1.2%, 4=5.1%, 8=77.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98757: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=155, BW=623KiB/s (638kB/s)(6252KiB/10036msec) 00:30:06.655 slat (nsec): min=4592, max=54181, avg=19281.67, stdev=8637.28 00:30:06.655 clat (msec): min=27, max=335, avg=102.54, stdev=67.20 00:30:06.655 lat (msec): min=27, max=335, avg=102.56, stdev=67.20 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 54], 00:30:06.655 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 81], 00:30:06.655 | 70.00th=[ 104], 80.00th=[ 190], 90.00th=[ 220], 95.00th=[ 239], 00:30:06.655 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 338], 99.95th=[ 338], 00:30:06.655 | 99.99th=[ 338] 00:30:06.655 bw ( KiB/s): min= 240, max= 1088, per=4.14%, avg=620.80, stdev=343.33, samples=20 00:30:06.655 iops : min= 60, max= 272, avg=155.20, stdev=85.83, samples=20 00:30:06.655 lat (msec) : 50=11.71%, 100=54.89%, 250=30.33%, 500=3.07% 00:30:06.655 cpu : usr=38.12%, sys=2.97%, ctx=491, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=1.7%, 4=7.2%, 8=75.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98758: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=154, BW=617KiB/s (631kB/s)(6184KiB/10028msec) 00:30:06.655 slat (nsec): min=4126, max=55144, avg=18986.09, stdev=9582.78 00:30:06.655 clat (msec): min=27, max=342, avg=103.60, stdev=64.54 00:30:06.655 lat (msec): min=27, max=342, avg=103.62, stdev=64.54 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 55], 00:30:06.655 | 30.00th=[ 61], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 82], 00:30:06.655 | 70.00th=[ 106], 80.00th=[ 161], 90.00th=[ 218], 95.00th=[ 245], 00:30:06.655 | 99.00th=[ 284], 99.50th=[ 284], 99.90th=[ 342], 99.95th=[ 342], 00:30:06.655 | 99.99th=[ 342] 00:30:06.655 bw ( KiB/s): min= 240, max= 1040, per=4.10%, avg=614.40, stdev=321.58, samples=20 00:30:06.655 iops : min= 60, max= 260, avg=153.60, stdev=80.40, samples=20 00:30:06.655 lat (msec) : 50=6.40%, 100=61.84%, 250=27.23%, 500=4.53% 00:30:06.655 cpu : usr=33.74%, sys=2.63%, ctx=418, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=1.2%, 4=5.0%, 8=77.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=89.1%, 8=9.7%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: 
total=1546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98759: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=149, BW=598KiB/s (612kB/s)(5996KiB/10027msec) 00:30:06.655 slat (usec): min=4, max=14080, avg=26.50, stdev=363.38 00:30:06.655 clat (msec): min=23, max=351, avg=106.78, stdev=69.65 00:30:06.655 lat (msec): min=23, max=351, avg=106.81, stdev=69.64 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 31], 5.00th=[ 41], 10.00th=[ 50], 20.00th=[ 55], 00:30:06.655 | 30.00th=[ 61], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 89], 00:30:06.655 | 70.00th=[ 104], 80.00th=[ 201], 90.00th=[ 226], 95.00th=[ 255], 00:30:06.655 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 351], 99.95th=[ 351], 00:30:06.655 | 99.99th=[ 351] 00:30:06.655 bw ( KiB/s): min= 144, max= 1072, per=3.97%, avg=595.60, stdev=339.31, samples=20 00:30:06.655 iops : min= 36, max= 268, avg=148.90, stdev=84.83, samples=20 00:30:06.655 lat (msec) : 50=10.94%, 100=57.24%, 250=25.55%, 500=6.27% 00:30:06.655 cpu : usr=41.92%, sys=3.51%, ctx=503, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=2.5%, 4=10.3%, 8=72.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=90.4%, 8=7.4%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1499,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename1: (groupid=0, jobs=1): err= 0: pid=98760: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=148, BW=596KiB/s (610kB/s)(5980KiB/10034msec) 00:30:06.655 slat (usec): min=4, max=18042, avg=32.16, stdev=513.68 00:30:06.655 clat (msec): min=25, max=338, avg=107.13, stdev=64.93 00:30:06.655 lat (msec): min=25, max=338, avg=107.16, stdev=64.93 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 34], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:30:06.655 | 30.00th=[ 66], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 85], 00:30:06.655 | 70.00th=[ 106], 80.00th=[ 188], 90.00th=[ 224], 95.00th=[ 243], 00:30:06.655 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 338], 99.95th=[ 338], 00:30:06.655 | 99.99th=[ 338] 00:30:06.655 bw ( KiB/s): min= 256, max= 1112, per=3.94%, avg=591.60, stdev=315.51, samples=20 00:30:06.655 iops : min= 64, max= 278, avg=147.90, stdev=78.88, samples=20 00:30:06.655 lat (msec) : 50=4.08%, 100=62.14%, 250=31.17%, 500=2.61% 00:30:06.655 cpu : usr=31.39%, sys=2.97%, ctx=441, majf=0, minf=9 00:30:06.655 IO depths : 1=0.1%, 2=2.6%, 4=10.8%, 8=71.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:30:06.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 complete : 0=0.0%, 4=90.7%, 8=6.7%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.655 issued rwts: total=1495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.655 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.655 filename2: (groupid=0, jobs=1): err= 0: pid=98761: Wed May 15 13:48:17 2024 00:30:06.655 read: IOPS=175, BW=703KiB/s (719kB/s)(7080KiB/10078msec) 00:30:06.655 slat (usec): min=3, max=10136, avg=20.56, stdev=240.68 00:30:06.655 clat (usec): min=556, max=260888, avg=90786.64, stdev=59681.16 00:30:06.655 lat (usec): min=564, max=260897, avg=90807.20, stdev=59682.80 00:30:06.655 clat percentiles (msec): 00:30:06.655 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 36], 20.00th=[ 54], 00:30:06.656 | 30.00th=[ 60], 40.00th=[ 66], 50.00th=[ 74], 
60.00th=[ 82], 00:30:06.656 | 70.00th=[ 100], 80.00th=[ 129], 90.00th=[ 199], 95.00th=[ 226], 00:30:06.656 | 99.00th=[ 249], 99.50th=[ 259], 99.90th=[ 262], 99.95th=[ 262], 00:30:06.656 | 99.99th=[ 262] 00:30:06.656 bw ( KiB/s): min= 272, max= 1536, per=4.68%, avg=701.60, stdev=352.81, samples=20 00:30:06.656 iops : min= 68, max= 384, avg=175.40, stdev=88.20, samples=20 00:30:06.656 lat (usec) : 750=0.11% 00:30:06.656 lat (msec) : 2=0.11%, 4=3.39%, 10=2.71%, 20=1.81%, 50=8.08% 00:30:06.656 lat (msec) : 100=54.12%, 250=28.76%, 500=0.90% 00:30:06.656 cpu : usr=42.45%, sys=3.81%, ctx=733, majf=0, minf=0 00:30:06.656 IO depths : 1=0.3%, 2=1.5%, 4=5.3%, 8=77.3%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98762: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=143, BW=573KiB/s (586kB/s)(5740KiB/10026msec) 00:30:06.656 slat (usec): min=4, max=18035, avg=26.68, stdev=475.78 00:30:06.656 clat (msec): min=28, max=352, avg=111.45, stdev=67.22 00:30:06.656 lat (msec): min=28, max=352, avg=111.47, stdev=67.26 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 35], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 58], 00:30:06.656 | 30.00th=[ 77], 40.00th=[ 82], 50.00th=[ 82], 60.00th=[ 88], 00:30:06.656 | 70.00th=[ 108], 80.00th=[ 190], 90.00th=[ 218], 95.00th=[ 247], 00:30:06.656 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 355], 99.95th=[ 355], 00:30:06.656 | 99.99th=[ 355] 00:30:06.656 bw ( KiB/s): min= 256, max= 992, per=3.80%, avg=569.20, stdev=296.75, samples=20 00:30:06.656 iops : min= 64, max= 248, avg=142.30, stdev=74.19, samples=20 00:30:06.656 lat (msec) : 50=2.44%, 100=62.79%, 250=30.31%, 500=4.46% 00:30:06.656 cpu : usr=30.34%, sys=2.79%, ctx=393, majf=0, minf=9 00:30:06.656 IO depths : 1=0.2%, 2=2.9%, 4=12.4%, 8=69.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=91.1%, 8=6.0%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98763: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=148, BW=594KiB/s (608kB/s)(5960KiB/10042msec) 00:30:06.656 slat (usec): min=5, max=18026, avg=25.85, stdev=466.68 00:30:06.656 clat (msec): min=25, max=348, avg=107.61, stdev=70.29 00:30:06.656 lat (msec): min=25, max=348, avg=107.63, stdev=70.31 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 29], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 56], 00:30:06.656 | 30.00th=[ 62], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 87], 00:30:06.656 | 70.00th=[ 104], 80.00th=[ 190], 90.00th=[ 222], 95.00th=[ 245], 00:30:06.656 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:30:06.656 | 99.99th=[ 351] 00:30:06.656 bw ( KiB/s): min= 256, max= 1088, per=3.93%, avg=589.60, stdev=314.93, samples=20 00:30:06.656 iops : min= 64, max= 272, avg=147.40, stdev=78.73, samples=20 00:30:06.656 lat (msec) : 50=4.90%, 100=63.56%, 250=27.38%, 500=4.16% 00:30:06.656 cpu : usr=30.53%, sys=2.62%, ctx=397, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=3.4%, 4=13.5%, 8=68.3%, 
16=14.8%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=91.5%, 8=5.5%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98764: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=163, BW=653KiB/s (669kB/s)(6552KiB/10034msec) 00:30:06.656 slat (usec): min=4, max=10164, avg=23.13, stdev=250.85 00:30:06.656 clat (msec): min=27, max=306, avg=97.78, stdev=61.88 00:30:06.656 lat (msec): min=27, max=306, avg=97.80, stdev=61.89 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 37], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 57], 00:30:06.656 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 73], 60.00th=[ 81], 00:30:06.656 | 70.00th=[ 96], 80.00th=[ 132], 90.00th=[ 218], 95.00th=[ 234], 00:30:06.656 | 99.00th=[ 268], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:30:06.656 | 99.99th=[ 309] 00:30:06.656 bw ( KiB/s): min= 240, max= 1072, per=4.33%, avg=648.80, stdev=333.15, samples=20 00:30:06.656 iops : min= 60, max= 268, avg=162.20, stdev=83.29, samples=20 00:30:06.656 lat (msec) : 50=13.61%, 100=57.14%, 250=27.84%, 500=1.40% 00:30:06.656 cpu : usr=41.86%, sys=3.90%, ctx=622, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=0.5%, 4=2.1%, 8=80.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=88.0%, 8=11.4%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98765: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=155, BW=623KiB/s (638kB/s)(6240KiB/10013msec) 00:30:06.656 slat (usec): min=4, max=18043, avg=28.05, stdev=456.49 00:30:06.656 clat (msec): min=16, max=441, avg=102.53, stdev=70.56 00:30:06.656 lat (msec): min=16, max=441, avg=102.56, stdev=70.55 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 28], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 54], 00:30:06.656 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 82], 00:30:06.656 | 70.00th=[ 97], 80.00th=[ 174], 90.00th=[ 218], 95.00th=[ 241], 00:30:06.656 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 443], 99.95th=[ 443], 00:30:06.656 | 99.99th=[ 443] 00:30:06.656 bw ( KiB/s): min= 128, max= 1112, per=4.04%, avg=605.89, stdev=350.25, samples=19 00:30:06.656 iops : min= 32, max= 278, avg=151.47, stdev=87.56, samples=19 00:30:06.656 lat (msec) : 20=0.96%, 50=5.64%, 100=64.81%, 250=26.41%, 500=2.18% 00:30:06.656 cpu : usr=35.00%, sys=2.88%, ctx=560, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=1.5%, 4=6.3%, 8=76.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98766: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=159, BW=638KiB/s (654kB/s)(6396KiB/10019msec) 00:30:06.656 slat (usec): min=7, max=15028, avg=24.96, stdev=375.50 00:30:06.656 clat (msec): min=13, max=438, avg=100.09, stdev=66.58 00:30:06.656 lat 
(msec): min=13, max=438, avg=100.12, stdev=66.59 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 53], 20.00th=[ 55], 00:30:06.656 | 30.00th=[ 58], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 82], 00:30:06.656 | 70.00th=[ 102], 80.00th=[ 133], 90.00th=[ 213], 95.00th=[ 243], 00:30:06.656 | 99.00th=[ 384], 99.50th=[ 384], 99.90th=[ 439], 99.95th=[ 439], 00:30:06.656 | 99.99th=[ 439] 00:30:06.656 bw ( KiB/s): min= 128, max= 1056, per=4.16%, avg=623.58, stdev=335.52, samples=19 00:30:06.656 iops : min= 32, max= 264, avg=155.89, stdev=83.88, samples=19 00:30:06.656 lat (msec) : 20=0.19%, 50=7.32%, 100=61.98%, 250=26.33%, 500=4.19% 00:30:06.656 cpu : usr=33.12%, sys=2.65%, ctx=493, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=0.6%, 4=2.8%, 8=79.9%, 16=16.6%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=88.5%, 8=10.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98767: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=157, BW=632KiB/s (647kB/s)(6336KiB/10032msec) 00:30:06.656 slat (usec): min=7, max=10153, avg=24.76, stdev=254.81 00:30:06.656 clat (msec): min=23, max=312, avg=101.14, stdev=63.09 00:30:06.656 lat (msec): min=23, max=312, avg=101.17, stdev=63.09 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 32], 5.00th=[ 44], 10.00th=[ 52], 20.00th=[ 56], 00:30:06.656 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 78], 60.00th=[ 84], 00:30:06.656 | 70.00th=[ 97], 80.00th=[ 142], 90.00th=[ 224], 95.00th=[ 243], 00:30:06.656 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 313], 99.95th=[ 313], 00:30:06.656 | 99.99th=[ 313] 00:30:06.656 bw ( KiB/s): min= 144, max= 1072, per=4.19%, avg=628.80, stdev=326.17, samples=20 00:30:06.656 iops : min= 36, max= 268, avg=157.20, stdev=81.54, samples=20 00:30:06.656 lat (msec) : 50=8.02%, 100=62.44%, 250=25.38%, 500=4.17% 00:30:06.656 cpu : usr=40.43%, sys=3.55%, ctx=473, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=79.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=88.6%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1584,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 filename2: (groupid=0, jobs=1): err= 0: pid=98768: Wed May 15 13:48:17 2024 00:30:06.656 read: IOPS=155, BW=621KiB/s (636kB/s)(6228KiB/10027msec) 00:30:06.656 slat (usec): min=4, max=18036, avg=32.92, stdev=523.93 00:30:06.656 clat (msec): min=29, max=313, avg=102.80, stdev=65.14 00:30:06.656 lat (msec): min=29, max=313, avg=102.83, stdev=65.16 00:30:06.656 clat percentiles (msec): 00:30:06.656 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 50], 20.00th=[ 59], 00:30:06.656 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 75], 60.00th=[ 85], 00:30:06.656 | 70.00th=[ 92], 80.00th=[ 174], 90.00th=[ 224], 95.00th=[ 245], 00:30:06.656 | 99.00th=[ 259], 99.50th=[ 259], 99.90th=[ 313], 99.95th=[ 313], 00:30:06.656 | 99.99th=[ 313] 00:30:06.656 bw ( KiB/s): min= 256, max= 1064, per=4.11%, avg=616.40, stdev=334.02, samples=20 00:30:06.656 iops : min= 64, max= 266, avg=154.10, stdev=83.51, samples=20 00:30:06.656 lat (msec) : 50=10.60%, 100=61.53%, 250=25.56%, 
500=2.31% 00:30:06.656 cpu : usr=40.24%, sys=3.51%, ctx=860, majf=0, minf=9 00:30:06.656 IO depths : 1=0.1%, 2=2.1%, 4=8.7%, 8=73.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:30:06.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 complete : 0=0.0%, 4=90.0%, 8=8.0%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.656 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.656 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:06.656 00:30:06.656 Run status group 0 (all jobs): 00:30:06.656 READ: bw=14.6MiB/s (15.3MB/s), 573KiB/s-734KiB/s (586kB/s-752kB/s), io=147MiB (155MB), run=10009-10078msec 00:30:06.656 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:06.656 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:06.657 13:48:17 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 bdev_null0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 [2024-05-15 13:48:17.934605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 bdev_null1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.657 { 00:30:06.657 "params": { 00:30:06.657 "name": "Nvme$subsystem", 00:30:06.657 "trtype": "$TEST_TRANSPORT", 00:30:06.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.657 "adrfam": "ipv4", 00:30:06.657 "trsvcid": "$NVMF_PORT", 00:30:06.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.657 "hdgst": ${hdgst:-false}, 00:30:06.657 "ddgst": ${ddgst:-false} 00:30:06.657 }, 00:30:06.657 "method": "bdev_nvme_attach_controller" 00:30:06.657 } 00:30:06.657 EOF 00:30:06.657 )") 00:30:06.657 13:48:17 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:06.657 { 00:30:06.657 "params": { 00:30:06.657 "name": "Nvme$subsystem", 00:30:06.657 "trtype": "$TEST_TRANSPORT", 00:30:06.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:06.657 "adrfam": "ipv4", 00:30:06.657 "trsvcid": "$NVMF_PORT", 00:30:06.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:06.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:06.657 "hdgst": ${hdgst:-false}, 00:30:06.657 "ddgst": ${ddgst:-false} 00:30:06.657 }, 00:30:06.657 "method": "bdev_nvme_attach_controller" 00:30:06.657 } 00:30:06.657 EOF 00:30:06.657 )") 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:06.657 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:06.658 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:06.658 13:48:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:06.658 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:30:06.658 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:06.658 13:48:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:06.658 "params": { 00:30:06.658 "name": "Nvme0", 00:30:06.658 "trtype": "tcp", 00:30:06.658 "traddr": "10.0.0.2", 00:30:06.658 "adrfam": "ipv4", 00:30:06.658 "trsvcid": "4420", 00:30:06.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:06.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:06.658 "hdgst": false, 00:30:06.658 "ddgst": false 00:30:06.658 }, 00:30:06.658 "method": "bdev_nvme_attach_controller" 00:30:06.658 },{ 00:30:06.658 "params": { 00:30:06.658 "name": "Nvme1", 00:30:06.658 "trtype": "tcp", 00:30:06.658 "traddr": "10.0.0.2", 00:30:06.658 "adrfam": "ipv4", 00:30:06.658 "trsvcid": "4420", 00:30:06.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:06.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:06.658 "hdgst": false, 00:30:06.658 "ddgst": false 00:30:06.658 }, 00:30:06.658 "method": "bdev_nvme_attach_controller" 00:30:06.658 }' 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:06.658 13:48:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:06.658 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:06.658 ... 00:30:06.658 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:06.658 ... 
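For reference, the four fio threads started below run against a job file that target/dif.sh assembles on the fly. The following is a minimal sketch of an equivalent job file, built from the parameters traced above (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1); the file path, the thread/time_based options, and the Nvme0n1/Nvme1n1 bdev names (SPDK's usual <controller>n<nsid> naming) are assumptions for illustration, not values taken verbatim from this log.

# hypothetical equivalent of the generated fio config; as traced above, fio itself is
# launched with LD_PRELOAD=.../build/fio/spdk_bdev and --spdk_json_conf pointing at the JSON printed above
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
# spdk_bdev comes from the preloaded SPDK fio plugin and requires thread mode
ioengine=spdk_bdev
thread=1
rw=randread
# read,write,trim block sizes; matches the (R)/(W)/(T) banner above
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

# filename= is the bdev exposed by the bdev_nvme_attach_controller entries in the JSON above
[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF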
00:30:06.658 fio-3.35 00:30:06.658 Starting 4 threads 00:30:10.842 00:30:10.842 filename0: (groupid=0, jobs=1): err= 0: pid=98895: Wed May 15 13:48:23 2024 00:30:10.842 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5003msec) 00:30:10.842 slat (nsec): min=3689, max=76225, avg=15565.30, stdev=4614.76 00:30:10.842 clat (usec): min=1098, max=6317, avg=3580.83, stdev=660.18 00:30:10.842 lat (usec): min=1111, max=6331, avg=3596.40, stdev=660.87 00:30:10.842 clat percentiles (usec): 00:30:10.842 | 1.00th=[ 1450], 5.00th=[ 2180], 10.00th=[ 2409], 20.00th=[ 3425], 00:30:10.842 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3851], 00:30:10.842 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4293], 00:30:10.842 | 99.00th=[ 4621], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5407], 00:30:10.842 | 99.99th=[ 6194] 00:30:10.842 bw ( KiB/s): min=16256, max=19792, per=24.48%, avg=17708.44, stdev=1398.35, samples=9 00:30:10.842 iops : min= 2032, max= 2474, avg=2213.56, stdev=174.79, samples=9 00:30:10.842 lat (msec) : 2=3.84%, 4=79.53%, 10=16.62% 00:30:10.842 cpu : usr=89.94%, sys=9.02%, ctx=70, majf=0, minf=9 00:30:10.842 IO depths : 1=0.1%, 2=16.2%, 4=55.2%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 issued rwts: total=11009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.842 filename0: (groupid=0, jobs=1): err= 0: pid=98896: Wed May 15 13:48:23 2024 00:30:10.842 read: IOPS=2233, BW=17.4MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:30:10.842 slat (nsec): min=4351, max=76903, avg=13765.75, stdev=4699.06 00:30:10.842 clat (usec): min=1019, max=6625, avg=3536.99, stdev=720.53 00:30:10.842 lat (usec): min=1028, max=6639, avg=3550.76, stdev=721.00 00:30:10.842 clat percentiles (usec): 00:30:10.842 | 1.00th=[ 1287], 5.00th=[ 1942], 10.00th=[ 2311], 20.00th=[ 3064], 00:30:10.842 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3785], 60.00th=[ 3851], 00:30:10.842 | 70.00th=[ 3884], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:30:10.842 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5342], 99.95th=[ 6063], 00:30:10.842 | 99.99th=[ 6390] 00:30:10.842 bw ( KiB/s): min=16256, max=19392, per=24.25%, avg=17539.56, stdev=1212.17, samples=9 00:30:10.842 iops : min= 2032, max= 2424, avg=2192.44, stdev=151.52, samples=9 00:30:10.842 lat (msec) : 2=5.97%, 4=77.35%, 10=16.68% 00:30:10.842 cpu : usr=89.62%, sys=9.48%, ctx=59, majf=0, minf=9 00:30:10.842 IO depths : 1=0.1%, 2=15.0%, 4=55.8%, 8=29.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 complete : 0=0.0%, 4=94.2%, 8=5.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 issued rwts: total=11172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.842 filename1: (groupid=0, jobs=1): err= 0: pid=98897: Wed May 15 13:48:23 2024 00:30:10.842 read: IOPS=2109, BW=16.5MiB/s (17.3MB/s)(82.4MiB/5001msec) 00:30:10.842 slat (nsec): min=6678, max=50545, avg=15618.01, stdev=4591.75 00:30:10.842 clat (usec): min=1368, max=6552, avg=3735.18, stdev=565.06 00:30:10.842 lat (usec): min=1375, max=6570, avg=3750.79, stdev=565.08 00:30:10.842 clat percentiles (usec): 00:30:10.842 | 1.00th=[ 1876], 5.00th=[ 2311], 10.00th=[ 3064], 20.00th=[ 3589], 00:30:10.842 | 30.00th=[ 
3687], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3884], 00:30:10.842 | 70.00th=[ 3949], 80.00th=[ 4015], 90.00th=[ 4359], 95.00th=[ 4490], 00:30:10.842 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 6063], 99.95th=[ 6456], 00:30:10.842 | 99.99th=[ 6521] 00:30:10.842 bw ( KiB/s): min=14464, max=18693, per=23.36%, avg=16896.56, stdev=1275.49, samples=9 00:30:10.842 iops : min= 1808, max= 2336, avg=2112.00, stdev=159.33, samples=9 00:30:10.842 lat (msec) : 2=2.43%, 4=76.91%, 10=20.66% 00:30:10.842 cpu : usr=89.72%, sys=9.40%, ctx=6, majf=0, minf=0 00:30:10.842 IO depths : 1=0.1%, 2=19.6%, 4=53.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.842 issued rwts: total=10550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.842 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.842 filename1: (groupid=0, jobs=1): err= 0: pid=98898: Wed May 15 13:48:23 2024 00:30:10.842 read: IOPS=2500, BW=19.5MiB/s (20.5MB/s)(97.7MiB/5001msec) 00:30:10.842 slat (nsec): min=6395, max=52163, avg=11408.21, stdev=4325.32 00:30:10.842 clat (usec): min=685, max=6321, avg=3167.10, stdev=1025.42 00:30:10.842 lat (usec): min=695, max=6336, avg=3178.51, stdev=1025.89 00:30:10.842 clat percentiles (usec): 00:30:10.842 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1270], 20.00th=[ 2376], 00:30:10.842 | 30.00th=[ 2769], 40.00th=[ 3195], 50.00th=[ 3589], 60.00th=[ 3687], 00:30:10.842 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4178], 95.00th=[ 4359], 00:30:10.843 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 5866], 99.95th=[ 5932], 00:30:10.843 | 99.99th=[ 6194] 00:30:10.843 bw ( KiB/s): min=15856, max=23136, per=28.15%, avg=20362.33, stdev=2595.81, samples=9 00:30:10.843 iops : min= 1982, max= 2892, avg=2545.22, stdev=324.56, samples=9 00:30:10.843 lat (usec) : 750=0.31%, 1000=0.21% 00:30:10.843 lat (msec) : 2=16.39%, 4=67.31%, 10=15.79% 00:30:10.843 cpu : usr=89.62%, sys=9.40%, ctx=87, majf=0, minf=9 00:30:10.843 IO depths : 1=0.1%, 2=6.3%, 4=60.5%, 8=33.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:10.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.843 complete : 0=0.0%, 4=97.7%, 8=2.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:10.843 issued rwts: total=12504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:10.843 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:10.843 00:30:10.843 Run status group 0 (all jobs): 00:30:10.843 READ: bw=70.6MiB/s (74.1MB/s), 16.5MiB/s-19.5MiB/s (17.3MB/s-20.5MB/s), io=353MiB (371MB), run=5001-5003msec 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:10.843 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.102 00:30:11.102 real 0m23.384s 00:30:11.102 user 1m59.486s 00:30:11.102 sys 0m11.950s 00:30:11.102 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:11.102 13:48:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 ************************************ 00:30:11.102 END TEST fio_dif_rand_params 00:30:11.102 ************************************ 00:30:11.102 13:48:23 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:11.102 13:48:23 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:11.102 13:48:23 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:11.102 13:48:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 ************************************ 00:30:11.102 START TEST fio_dif_digest 00:30:11.102 ************************************ 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:11.102 13:48:23 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:11.102 13:48:23 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 bdev_null0 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:11.102 [2024-05-15 13:48:24.037068] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:11.102 { 00:30:11.102 "params": { 00:30:11.102 "name": "Nvme$subsystem", 00:30:11.102 "trtype": "$TEST_TRANSPORT", 00:30:11.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:11.102 "adrfam": "ipv4", 00:30:11.102 "trsvcid": "$NVMF_PORT", 00:30:11.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:11.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:11.102 "hdgst": ${hdgst:-false}, 00:30:11.102 "ddgst": ${ddgst:-false} 00:30:11.102 }, 00:30:11.102 "method": "bdev_nvme_attach_controller" 00:30:11.102 } 00:30:11.102 EOF 00:30:11.102 )") 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:11.102 "params": { 00:30:11.102 "name": "Nvme0", 00:30:11.102 "trtype": "tcp", 00:30:11.102 "traddr": "10.0.0.2", 00:30:11.102 "adrfam": "ipv4", 00:30:11.102 "trsvcid": "4420", 00:30:11.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:11.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:11.102 "hdgst": true, 00:30:11.102 "ddgst": true 00:30:11.102 }, 00:30:11.102 "method": "bdev_nvme_attach_controller" 00:30:11.102 }' 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:11.102 13:48:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:11.359 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:11.359 ... 
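For reference, the create_subsystem and create_json_sub_conf steps traced above for this digest pass reduce to a handful of SPDK RPCs plus the JSON printed just before the fio banner. A minimal sketch using the stock scripts/rpc.py client follows; the argument values are copied from the rpc_cmd trace above, and the only assumption is that rpc_cmd forwards them unchanged to rpc.py on the default RPC socket.

# null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# export it as a namespace of cnode0 over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

fio then reaches that namespace through the printed bdev_nvme_attach_controller JSON, with hdgst and ddgst set to true, so the 128 KiB random reads below exercise the TCP header- and data-digest paths this test is named for.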
00:30:11.359 fio-3.35 00:30:11.359 Starting 3 threads 00:30:23.673 00:30:23.673 filename0: (groupid=0, jobs=1): err= 0: pid=99004: Wed May 15 13:48:34 2024 00:30:23.673 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(307MiB/10003msec) 00:30:23.673 slat (usec): min=6, max=290, avg=11.23, stdev= 7.21 00:30:23.673 clat (usec): min=10168, max=13334, avg=12184.36, stdev=334.49 00:30:23.673 lat (usec): min=10176, max=13358, avg=12195.59, stdev=334.79 00:30:23.673 clat percentiles (usec): 00:30:23.673 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:30:23.673 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:30:23.673 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12649], 00:30:23.673 | 99.00th=[12780], 99.50th=[12780], 99.90th=[13304], 99.95th=[13304], 00:30:23.673 | 99.99th=[13304] 00:30:23.673 bw ( KiB/s): min=30720, max=32256, per=33.31%, avg=31414.35, stdev=492.79, samples=20 00:30:23.673 iops : min= 240, max= 252, avg=245.40, stdev= 3.84, samples=20 00:30:23.673 lat (msec) : 20=100.00% 00:30:23.674 cpu : usr=88.51%, sys=10.67%, ctx=114, majf=0, minf=9 00:30:23.674 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.674 filename0: (groupid=0, jobs=1): err= 0: pid=99005: Wed May 15 13:48:34 2024 00:30:23.674 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(307MiB/10002msec) 00:30:23.674 slat (nsec): min=6567, max=41052, avg=10955.05, stdev=4332.87 00:30:23.674 clat (usec): min=11084, max=12864, avg=12184.49, stdev=326.31 00:30:23.674 lat (usec): min=11091, max=12887, avg=12195.44, stdev=326.80 00:30:23.674 clat percentiles (usec): 00:30:23.674 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:30:23.674 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:30:23.674 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12649], 00:30:23.674 | 99.00th=[12780], 99.50th=[12780], 99.90th=[12911], 99.95th=[12911], 00:30:23.674 | 99.99th=[12911] 00:30:23.674 bw ( KiB/s): min=30720, max=32256, per=33.34%, avg=31447.58, stdev=477.13, samples=19 00:30:23.674 iops : min= 240, max= 252, avg=245.68, stdev= 3.73, samples=19 00:30:23.674 lat (msec) : 20=100.00% 00:30:23.674 cpu : usr=88.64%, sys=10.60%, ctx=94, majf=0, minf=0 00:30:23.674 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.674 filename0: (groupid=0, jobs=1): err= 0: pid=99006: Wed May 15 13:48:34 2024 00:30:23.674 read: IOPS=245, BW=30.7MiB/s (32.2MB/s)(307MiB/10001msec) 00:30:23.674 slat (nsec): min=6638, max=64461, avg=11091.34, stdev=4794.17 00:30:23.674 clat (usec): min=8662, max=14359, avg=12182.56, stdev=357.41 00:30:23.674 lat (usec): min=8670, max=14387, avg=12193.65, stdev=357.84 00:30:23.674 clat percentiles (usec): 00:30:23.674 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:30:23.674 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 
60.00th=[12256], 00:30:23.674 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12649], 00:30:23.674 | 99.00th=[12780], 99.50th=[12780], 99.90th=[14353], 99.95th=[14353], 00:30:23.674 | 99.99th=[14353] 00:30:23.674 bw ( KiB/s): min=30720, max=32256, per=33.30%, avg=31407.16, stdev=505.22, samples=19 00:30:23.674 iops : min= 240, max= 252, avg=245.37, stdev= 3.95, samples=19 00:30:23.674 lat (msec) : 10=0.12%, 20=99.88% 00:30:23.674 cpu : usr=88.82%, sys=10.41%, ctx=162, majf=0, minf=9 00:30:23.674 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:23.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:23.674 issued rwts: total=2457,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:23.674 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:23.674 00:30:23.674 Run status group 0 (all jobs): 00:30:23.674 READ: bw=92.1MiB/s (96.6MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=921MiB (966MB), run=10001-10003msec 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.674 00:30:23.674 real 0m10.910s 00:30:23.674 user 0m27.146s 00:30:23.674 sys 0m3.444s 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:23.674 ************************************ 00:30:23.674 END TEST fio_dif_digest 00:30:23.674 ************************************ 00:30:23.674 13:48:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 13:48:34 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:23.674 13:48:34 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:23.674 13:48:34 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:23.674 rmmod nvme_tcp 00:30:23.674 rmmod nvme_fabrics 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@124 -- # set -e 
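destroy_subsystems in the trace above comes down to two management RPCs: drop the NVMe-oF subsystem that exported the null bdev, then delete the bdev itself. rpc_cmd is the harness wrapper around scripts/rpc.py; issued by hand against the default /var/tmp/spdk.sock the equivalent would be roughly:

  # tear down the digest test target: subsystem first, then its backing null bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_delete bdev_null0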
00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 98269 ']' 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 98269 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 98269 ']' 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 98269 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98269 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:23.674 killing process with pid 98269 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98269' 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@965 -- # kill 98269 00:30:23.674 [2024-05-15 13:48:35.057613] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@970 -- # wait 98269 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:23.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:23.674 Waiting for block devices as requested 00:30:23.674 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:23.674 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.674 13:48:35 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:23.674 00:30:23.674 real 0m59.545s 00:30:23.674 user 3m41.788s 00:30:23.674 sys 0m25.030s 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:23.674 13:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 ************************************ 00:30:23.674 END TEST nvmf_dif 00:30:23.674 ************************************ 00:30:23.674 13:48:35 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:23.674 13:48:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:23.674 13:48:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:23.674 13:48:35 -- common/autotest_common.sh@10 -- # set +x 00:30:23.674 ************************************ 00:30:23.674 START TEST nvmf_abort_qd_sizes 00:30:23.674 ************************************ 00:30:23.674 13:48:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:23.674 * Looking for test storage... 00:30:23.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.674 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:23.675 13:48:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:23.675 Cannot find device "nvmf_tgt_br" 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:23.675 Cannot find device "nvmf_tgt_br2" 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:23.675 Cannot find device "nvmf_tgt_br" 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:23.675 Cannot find device "nvmf_tgt_br2" 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:23.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:23.675 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:23.675 13:48:36 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:23.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:30:23.675 00:30:23.675 --- 10.0.0.2 ping statistics --- 00:30:23.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.675 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:23.675 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:23.675 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:30:23.675 00:30:23.675 --- 10.0.0.3 ping statistics --- 00:30:23.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.675 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:30:23.675 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:23.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:30:23.675 00:30:23.675 --- 10.0.0.1 ping statistics --- 00:30:23.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.676 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:30:23.676 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.676 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:30:23.676 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:23.676 13:48:36 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:24.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:24.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:24.502 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=99594 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 99594 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 99594 ']' 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:24.502 13:48:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:24.502 [2024-05-15 13:48:37.534383] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:24.502 [2024-05-15 13:48:37.534697] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.760 [2024-05-15 13:48:37.665942] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
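Everything after nvmf_veth_init above runs the target in a private network namespace: a veth-plus-bridge fabric gives the host side 10.0.0.1 and the namespace side 10.0.0.2 and 10.0.0.3, the three pings verify reachability in both directions, and nvmfappstart then launches nvmf_tgt inside that namespace with core mask 0xf. A trimmed sketch of the same idea, assuming root and the interface names from the trace but collapsing the bridge, the second veth pair and the iptables rules into a single link:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2    # initiator-side reachability check, as in the log
  # start the target inside the namespace, mirroring the nvmfappstart command above
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &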
00:30:24.760 [2024-05-15 13:48:37.686593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.760 [2024-05-15 13:48:37.746293] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.760 [2024-05-15 13:48:37.746561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.760 [2024-05-15 13:48:37.746756] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.760 [2024-05-15 13:48:37.746926] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.760 [2024-05-15 13:48:37.746982] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:24.760 [2024-05-15 13:48:37.747269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.760 [2024-05-15 13:48:37.747424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.760 [2024-05-15 13:48:37.747511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.760 [2024-05-15 13:48:37.747512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:30:25.737 13:48:38 
nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:25.737 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:25.738 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:25.738 13:48:38 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:25.738 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:25.738 13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:25.738 
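nvme_in_userspace above selects the controllers to test by PCI class code rather than by name: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), which on this VM resolves to 0000:00:10.0 and 0000:00:11.0. The same filter, reassembled from the traced pipeline pieces (the exact ordering of tr and awk in the pipe is inferred, not shown in the xtrace):

  # print the PCI addresses of every NVMe controller (class/subclass/prog-if 01/08/02)
  lspci -mm -n -D | grep -i -- -p02 | tr -d '"' \
    | awk -v cc="0108" '{ if (cc ~ $2) print $1 }'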
13:48:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 ************************************ 00:30:25.738 START TEST spdk_target_abort 00:30:25.738 ************************************ 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 spdk_targetn1 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 [2024-05-15 13:48:38.729459] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:25.738 [2024-05-15 13:48:38.761410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:25.738 [2024-05-15 13:48:38.761810] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 
4420 nqn.2016-06.io.spdk:testnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:25.738 13:48:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:29.107 Initializing NVMe Controllers 00:30:29.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:29.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:29.107 Initialization complete. Launching workers. 
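The spdk_target_abort leg drives the stock abort example against the subsystem it just created, once per queue depth in qds=(4 24 64); each pass reports how many I/Os completed, how many abort commands were submitted, and how many of those aborts caught their target (the success/unsuccess line that follows each run). The traced loop reduces to roughly:

  for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done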
00:30:29.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12672, failed: 0 00:30:29.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 472, failed to submit 12200 00:30:29.107 success 277, unsuccess 195, failed 0 00:30:29.107 13:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:29.107 13:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:33.294 Initializing NVMe Controllers 00:30:33.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:33.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:33.294 Initialization complete. Launching workers. 00:30:33.294 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 2808, failed: 0 00:30:33.294 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 468, failed to submit 2340 00:30:33.294 success 192, unsuccess 276, failed 0 00:30:33.294 13:48:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:33.294 13:48:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.824 Initializing NVMe Controllers 00:30:35.824 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:35.824 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:35.824 Initialization complete. Launching workers. 
00:30:35.824 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 26250, failed: 0 00:30:35.824 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1104, failed to submit 25146 00:30:35.824 success 131, unsuccess 973, failed 0 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.824 13:48:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99594 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 99594 ']' 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 99594 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99594 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99594' 00:30:36.083 killing process with pid 99594 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 99594 00:30:36.083 [2024-05-15 13:48:49.123726] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:36.083 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 99594 00:30:36.341 00:30:36.341 real 0m10.660s 00:30:36.341 user 0m42.984s 00:30:36.341 sys 0m2.851s 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:36.341 ************************************ 00:30:36.341 END TEST spdk_target_abort 00:30:36.341 ************************************ 00:30:36.341 13:48:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:36.341 13:48:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:36.341 13:48:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:30:36.341 13:48:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:36.341 ************************************ 00:30:36.341 START TEST kernel_target_abort 00:30:36.341 ************************************ 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:36.341 13:48:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:36.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:36.908 Waiting for block devices as requested 00:30:36.908 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.166 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:37.166 No valid GPT data, bailing 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:37.166 No valid GPT data, bailing 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:37.166 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:37.425 No valid GPT data, bailing 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:37.425 No valid GPT data, bailing 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 --hostid=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 -a 10.0.0.1 -t tcp -s 4420 00:30:37.425 00:30:37.425 Discovery Log Number of Records 2, Generation counter 2 00:30:37.425 =====Discovery Log Entry 0====== 00:30:37.425 trtype: tcp 00:30:37.425 adrfam: ipv4 00:30:37.425 subtype: current discovery subsystem 00:30:37.425 treq: not specified, sq flow control disable supported 00:30:37.425 portid: 1 00:30:37.425 trsvcid: 4420 00:30:37.425 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:37.425 traddr: 10.0.0.1 00:30:37.425 eflags: none 00:30:37.425 sectype: none 00:30:37.425 =====Discovery Log Entry 1====== 00:30:37.425 trtype: tcp 00:30:37.425 adrfam: ipv4 00:30:37.425 subtype: nvme subsystem 00:30:37.425 treq: not specified, sq flow control disable supported 00:30:37.425 portid: 1 00:30:37.425 trsvcid: 4420 00:30:37.425 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:37.425 traddr: 10.0.0.1 00:30:37.425 eflags: none 00:30:37.425 sectype: none 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:37.425 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:37.426 13:48:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:37.426 13:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:40.709 Initializing NVMe Controllers 00:30:40.709 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:40.709 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:40.709 Initialization complete. Launching workers. 00:30:40.709 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38302, failed: 0 00:30:40.709 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38302, failed to submit 0 00:30:40.709 success 0, unsuccess 38302, failed 0 00:30:40.709 13:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:40.709 13:48:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:43.995 Initializing NVMe Controllers 00:30:43.995 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:43.995 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:43.995 Initialization complete. Launching workers. 
00:30:43.995 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79616, failed: 0 00:30:43.995 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35478, failed to submit 44138 00:30:43.995 success 0, unsuccess 35478, failed 0 00:30:43.995 13:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:43.995 13:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:47.279 Initializing NVMe Controllers 00:30:47.279 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:47.279 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:47.279 Initialization complete. Launching workers. 00:30:47.279 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84354, failed: 0 00:30:47.279 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21090, failed to submit 63264 00:30:47.279 success 0, unsuccess 21090, failed 0 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:47.279 13:49:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:47.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:49.216 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:49.216 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:49.216 ************************************ 00:30:49.216 END TEST kernel_target_abort 00:30:49.216 ************************************ 00:30:49.216 00:30:49.216 real 0m12.757s 00:30:49.216 user 0m6.424s 00:30:49.216 sys 0m3.795s 00:30:49.216 13:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:49.216 13:49:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:49.216 13:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:49.216 13:49:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:49.216 
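Taken together, the kernel_target_abort trace above reduces to three steps: expose a local NVMe namespace through the kernel nvmet TCP target via configfs, confirm it with nvme discover, then drive it with SPDK's abort example at each queue depth in qds=(4 24 64). A condensed sketch of that sequence, assuming the same /dev/nvme1n1 namespace, test NQN and 10.0.0.1:4420 listener as this run; the xtrace above elides the configfs redirection targets, so the attribute names below follow the standard nvmet configfs layout and the SPDK-prefixed serial/model write is left out:

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet
  modprobe nvmet_tcp                                   # pulls in nvmet as a dependency

  mkdir "$cfs/subsystems/$nqn"
  mkdir "$cfs/subsystems/$nqn/namespaces/1"
  mkdir "$cfs/ports/1"
  echo 1             > "$cfs/subsystems/$nqn/attr_allow_any_host"
  echo /dev/nvme1n1  > "$cfs/subsystems/$nqn/namespaces/1/device_path"
  echo 1             > "$cfs/subsystems/$nqn/namespaces/1/enable"
  echo 10.0.0.1      > "$cfs/ports/1/addr_traddr"
  echo tcp           > "$cfs/ports/1/addr_trtype"
  echo 4420          > "$cfs/ports/1/addr_trsvcid"
  echo ipv4          > "$cfs/ports/1/addr_adrfam"
  ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"

  nvme discover -t tcp -a 10.0.0.1 -s 4420             # expect two records: discovery subsystem + $nqn

  for qd in 4 24 64; do                                # same queue depths as abort_qd_sizes.sh
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:$nqn"
  done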
13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:49.216 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:49.217 rmmod nvme_tcp 00:30:49.217 rmmod nvme_fabrics 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 99594 ']' 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 99594 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 99594 ']' 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 99594 00:30:49.217 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (99594) - No such process 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 99594 is not found' 00:30:49.217 Process with pid 99594 is not found 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:49.217 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:49.782 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:49.782 Waiting for block devices as requested 00:30:49.782 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.040 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:50.040 13:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.040 13:49:03 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:50.040 00:30:50.040 real 0m27.043s 00:30:50.040 user 0m50.653s 00:30:50.040 sys 0m8.306s 00:30:50.040 13:49:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:50.040 ************************************ 00:30:50.040 END TEST nvmf_abort_qd_sizes 00:30:50.040 ************************************ 00:30:50.040 13:49:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:50.040 13:49:03 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:50.040 13:49:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:50.040 13:49:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:50.040 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.040 
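Teardown mirrors the setup, which is essentially what clean_kernel_target and nvmftestfini do above: disable the namespace, unlink the subsystem from the port, remove the configfs directories, then unload the target-side and host-side modules and hand the PCI devices back via setup.sh. A sketch under the same assumptions as the setup sketch (the redirect target of the 'echo 0' above is elided in the trace and assumed here to be the namespace enable attribute):

  nqn=nqn.2016-06.io.spdk:testnqn
  cfs=/sys/kernel/config/nvmet

  echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"   # assumed target of the 'echo 0' above
  rm -f "$cfs/ports/1/subsystems/$nqn"
  rmdir "$cfs/subsystems/$nqn/namespaces/1"
  rmdir "$cfs/ports/1"
  rmdir "$cfs/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                           # drop the kernel target
  modprobe -r nvme-tcp nvme-fabrics                     # drop the host-side fabrics modules
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset   # rebind NVMe devices to the kernel driver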
************************************ 00:30:50.040 START TEST keyring_file 00:30:50.040 ************************************ 00:30:50.040 13:49:03 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:30:50.299 * Looking for test storage... 00:30:50.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:30:50.299 13:49:03 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:30:50.299 13:49:03 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f6a53dc4-3b2f-458a-99e2-288ecdb045d4 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.299 13:49:03 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:50.299 13:49:03 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.299 13:49:03 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.299 13:49:03 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.299 13:49:03 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.300 13:49:03 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.300 13:49:03 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.300 13:49:03 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:50.300 13:49:03 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.d82HET6nDi 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:50.300 13:49:03 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.d82HET6nDi 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.d82HET6nDi 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.d82HET6nDi 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MLrz4XxXjg 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:50.300 13:49:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MLrz4XxXjg 00:30:50.300 13:49:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MLrz4XxXjg 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MLrz4XxXjg 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@30 -- # tgtpid=100463 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:50.300 13:49:03 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100463 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 100463 ']' 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:50.300 13:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:50.300 [2024-05-15 13:49:03.394677] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:50.300 [2024-05-15 13:49:03.394993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100463 ] 00:30:50.557 [2024-05-15 13:49:03.521743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
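The two /tmp/tmp.* files created above are TLS pre-shared keys in the NVMe-oF interchange format (NVMeTLSkey-1:...), produced by prep_key via format_interchange_psk and locked down to mode 0600 so the keyring will later accept them. A minimal sketch of preparing one such key by hand with the same helper, assuming test/nvmf/common.sh can be sourced standalone and that format_interchange_psk writes the formatted key to stdout:

  source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh    # provides format_interchange_psk
  keyfile=$(mktemp)                                          # e.g. /tmp/tmp.d82HET6nDi above
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$keyfile"   # key material + digest arg, as for key0
  chmod 0600 "$keyfile"                                      # looser modes are rejected later in the test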
00:30:50.557 [2024-05-15 13:49:03.540371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.557 [2024-05-15 13:49:03.620986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:50.814 13:49:03 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 [2024-05-15 13:49:03.826026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.814 null0 00:30:50.814 [2024-05-15 13:49:03.857967] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:50.814 [2024-05-15 13:49:03.858209] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:50.814 [2024-05-15 13:49:03.858521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:50.814 [2024-05-15 13:49:03.865994] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.814 13:49:03 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:50.814 [2024-05-15 13:49:03.881994] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:50.814 request: 00:30:50.814 { 00:30:50.814 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:50.814 "secure_channel": false, 00:30:50.814 "listen_address": { 00:30:50.814 "trtype": "tcp", 00:30:50.814 "traddr": "127.0.0.1", 00:30:50.814 "trsvcid": "4420" 00:30:50.814 }, 00:30:50.814 "method": "nvmf_subsystem_add_listener", 00:30:50.814 "req_id": 1 00:30:50.814 } 00:30:50.814 Got JSON-RPC error response 00:30:50.814 response: 00:30:50.814 { 00:30:50.814 "code": -32602, 00:30:50.814 "message": "Invalid parameters" 00:30:50.814 } 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:50.814 13:49:03 keyring_file -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:50.814 13:49:03 keyring_file -- keyring/file.sh@46 -- # bperfpid=100468 00:30:50.814 13:49:03 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:50.814 13:49:03 keyring_file -- keyring/file.sh@48 -- # waitforlisten 100468 /var/tmp/bperf.sock 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 100468 ']' 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:50.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:50.814 13:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:51.072 [2024-05-15 13:49:03.949887] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:51.072 [2024-05-15 13:49:03.950253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100468 ] 00:30:51.072 [2024-05-15 13:49:04.081473] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
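bperf_cmd in keyring/common.sh is simply scripts/rpc.py pointed at the bdevperf RPC socket passed as -r /var/tmp/bperf.sock above, so the keyring and bdev operations that follow can be reproduced by hand against any bdevperf started with -z. A sketch of the core flow, using the same key paths and NQNs as this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.d82HET6nDi      # register both PSK files
  "$rpc" -s "$sock" keyring_file_add_key key1 /tmp/tmp.MLrz4XxXjg

  # attach an NVMe/TCP controller to the local target, using key0 as the TLS PSK
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

  # inspect key state (path, refcnt) the way get_key/get_refcnt do
  "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")'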
00:30:51.072 [2024-05-15 13:49:04.099967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.073 [2024-05-15 13:49:04.158142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.329 13:49:04 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:51.329 13:49:04 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:30:51.329 13:49:04 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:51.329 13:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:51.587 13:49:04 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MLrz4XxXjg 00:30:51.587 13:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MLrz4XxXjg 00:30:51.845 13:49:04 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:51.845 13:49:04 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:51.845 13:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:51.845 13:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:51.845 13:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:52.102 13:49:05 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.d82HET6nDi == \/\t\m\p\/\t\m\p\.\d\8\2\H\E\T\6\n\D\i ]] 00:30:52.102 13:49:05 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:30:52.102 13:49:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:52.102 13:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.102 13:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:52.102 13:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.360 13:49:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MLrz4XxXjg == \/\t\m\p\/\t\m\p\.\M\L\r\z\4\X\x\X\j\g ]] 00:30:52.360 13:49:05 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:52.360 13:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:52.360 13:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.360 13:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.360 13:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.360 13:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:52.617 13:49:05 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:52.617 13:49:05 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:52.617 13:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:52.617 13:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:52.618 13:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:52.618 13:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:52.618 13:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:52.876 13:49:05 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:52.876 13:49:05 keyring_file -- keyring/file.sh@57 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:52.876 13:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:53.442 [2024-05-15 13:49:06.246701] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:53.442 nvme0n1 00:30:53.442 13:49:06 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:53.442 13:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:53.442 13:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.442 13:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.442 13:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:53.442 13:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.701 13:49:06 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:53.701 13:49:06 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:53.701 13:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:53.701 13:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:53.701 13:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:53.701 13:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:53.701 13:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:53.958 13:49:06 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:53.958 13:49:06 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:53.958 Running I/O for 1 seconds... 
00:30:55.334 00:30:55.334 Latency(us) 00:30:55.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:55.334 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:55.334 nvme0n1 : 1.00 13254.51 51.78 0.00 0.00 9630.83 4743.56 22219.82 00:30:55.334 =================================================================================================================== 00:30:55.334 Total : 13254.51 51.78 0.00 0.00 9630.83 4743.56 22219.82 00:30:55.334 0 00:30:55.334 13:49:08 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:55.334 13:49:08 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.334 13:49:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:55.901 13:49:08 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:55.901 13:49:08 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:55.901 13:49:08 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:55.901 13:49:08 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:55.901 13:49:08 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:55.901 13:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:56.161 [2024-05-15 13:49:09.173019] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:56.161 [2024-05-15 13:49:09.173759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b7780 (107): Transport endpoint is not connected 00:30:56.161 [2024-05-15 13:49:09.174751] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b7780 (9): Bad file descriptor 00:30:56.161 [2024-05-15 13:49:09.175746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.161 [2024-05-15 13:49:09.175882] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:56.161 [2024-05-15 13:49:09.176000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.161 request: 00:30:56.161 { 00:30:56.161 "name": "nvme0", 00:30:56.161 "trtype": "tcp", 00:30:56.161 "traddr": "127.0.0.1", 00:30:56.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:56.161 "adrfam": "ipv4", 00:30:56.161 "trsvcid": "4420", 00:30:56.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.161 "psk": "key1", 00:30:56.161 "method": "bdev_nvme_attach_controller", 00:30:56.161 "req_id": 1 00:30:56.161 } 00:30:56.161 Got JSON-RPC error response 00:30:56.161 response: 00:30:56.161 { 00:30:56.161 "code": -32602, 00:30:56.161 "message": "Invalid parameters" 00:30:56.161 } 00:30:56.161 13:49:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:56.161 13:49:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:56.161 13:49:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:56.161 13:49:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:56.161 13:49:09 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:56.161 13:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:56.161 13:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.161 13:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.161 13:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.161 13:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:56.420 13:49:09 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:56.420 13:49:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:56.420 13:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:56.420 13:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:56.420 13:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:56.420 13:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:56.420 13:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:56.678 13:49:09 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:56.678 13:49:09 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:56.678 13:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:56.936 13:49:09 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:56.936 13:49:09 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:57.193 13:49:10 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:57.193 13:49:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:57.193 13:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:57.452 13:49:10 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:57.452 13:49:10 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.d82HET6nDi 00:30:57.452 13:49:10 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:57.452 13:49:10 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.452 13:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.711 [2024-05-15 13:49:10.690359] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.d82HET6nDi': 0100660 00:30:57.712 [2024-05-15 13:49:10.690682] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:57.712 request: 00:30:57.712 { 00:30:57.712 "name": "key0", 00:30:57.712 "path": "/tmp/tmp.d82HET6nDi", 00:30:57.712 "method": "keyring_file_add_key", 00:30:57.712 "req_id": 1 00:30:57.712 } 00:30:57.712 Got JSON-RPC error response 00:30:57.712 response: 00:30:57.712 { 00:30:57.712 "code": -1, 00:30:57.712 "message": "Operation not permitted" 00:30:57.712 } 00:30:57.712 13:49:10 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:57.712 13:49:10 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:57.712 13:49:10 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:57.712 13:49:10 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:57.712 13:49:10 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.d82HET6nDi 00:30:57.712 13:49:10 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.712 13:49:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.d82HET6nDi 00:30:57.970 13:49:11 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.d82HET6nDi 00:30:57.970 13:49:11 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:57.970 13:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:57.970 13:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:57.970 13:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:57.970 13:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
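The rejected keyring_file_add_key above is the file-permission check: keyring_file refuses key files whose mode is not 0600, logging the offending mode (0100660) and returning -1 / 'Operation not permitted'; the key only registers once the mode is restored. A short sketch of the same check, with $rpc and $sock as in the earlier sketch:

  chmod 0660 /tmp/tmp.d82HET6nDi
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.d82HET6nDi   # fails: Invalid permissions ... 0100660
  chmod 0600 /tmp/tmp.d82HET6nDi
  "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.d82HET6nDi   # accepted once the mode is 0600 again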
00:30:57.970 13:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:58.228 13:49:11 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:58.228 13:49:11 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.228 13:49:11 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:58.228 13:49:11 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.228 13:49:11 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:58.228 13:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.229 13:49:11 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:58.229 13:49:11 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.229 13:49:11 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.229 13:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:58.487 [2024-05-15 13:49:11.506505] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.d82HET6nDi': No such file or directory 00:30:58.487 [2024-05-15 13:49:11.506829] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:58.487 [2024-05-15 13:49:11.506953] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:58.487 [2024-05-15 13:49:11.507030] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:58.487 [2024-05-15 13:49:11.507068] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:58.487 request: 00:30:58.487 { 00:30:58.487 "name": "nvme0", 00:30:58.487 "trtype": "tcp", 00:30:58.487 "traddr": "127.0.0.1", 00:30:58.487 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.487 "adrfam": "ipv4", 00:30:58.487 "trsvcid": "4420", 00:30:58.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.487 "psk": "key0", 00:30:58.487 "method": "bdev_nvme_attach_controller", 00:30:58.487 "req_id": 1 00:30:58.487 } 00:30:58.487 Got JSON-RPC error response 00:30:58.487 response: 00:30:58.487 { 00:30:58.487 "code": -19, 00:30:58.487 "message": "No such device" 00:30:58.487 } 00:30:58.487 13:49:11 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:58.487 13:49:11 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:58.487 13:49:11 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:58.487 13:49:11 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:58.487 13:49:11 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:58.487 13:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:58.746 13:49:11 keyring_file -- 
keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H3CYqJ5Y4W 00:30:58.746 13:49:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:58.746 13:49:11 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:59.004 13:49:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H3CYqJ5Y4W 00:30:59.004 13:49:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H3CYqJ5Y4W 00:30:59.004 13:49:11 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.H3CYqJ5Y4W 00:30:59.004 13:49:11 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H3CYqJ5Y4W 00:30:59.004 13:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H3CYqJ5Y4W 00:30:59.263 13:49:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.263 13:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:59.522 nvme0n1 00:30:59.522 13:49:12 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:59.522 13:49:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:59.522 13:49:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:59.522 13:49:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:59.522 13:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:59.522 13:49:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:59.780 13:49:12 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:59.780 13:49:12 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:59.780 13:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:00.039 13:49:13 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:00.039 13:49:13 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:00.040 13:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.040 13:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
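The keyring_file_remove_key just issued exercises the keyring's reference counting: removing a key that an attached controller still uses does not revoke it immediately; keyring_get_keys reports it with removed:true and a non-zero refcnt until bdev_nvme_detach_controller drops the last reference, after which the key list is empty. A sketch of that sequence, again with $rpc and $sock as defined earlier (the combined jq filter here is illustrative; the test runs separate .removed and .refcnt queries):

  "$rpc" -s "$sock" keyring_file_remove_key key0
  "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | {removed, refcnt}'   # removed true, refcnt still 1
  "$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
  "$rpc" -s "$sock" keyring_get_keys | jq length                                                # 0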
00:31:00.040 13:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.299 13:49:13 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:00.299 13:49:13 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:00.299 13:49:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:00.299 13:49:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:00.299 13:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:00.299 13:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:00.299 13:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:00.863 13:49:13 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:00.863 13:49:13 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:00.863 13:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:00.863 13:49:13 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:00.863 13:49:13 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:00.863 13:49:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:01.121 13:49:14 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:01.121 13:49:14 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H3CYqJ5Y4W 00:31:01.121 13:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H3CYqJ5Y4W 00:31:01.378 13:49:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MLrz4XxXjg 00:31:01.378 13:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MLrz4XxXjg 00:31:01.635 13:49:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:01.635 13:49:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:02.201 nvme0n1 00:31:02.201 13:49:15 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:02.201 13:49:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:02.460 13:49:15 keyring_file -- keyring/file.sh@112 -- # config='{ 00:31:02.460 "subsystems": [ 00:31:02.460 { 00:31:02.460 "subsystem": "keyring", 00:31:02.460 "config": [ 00:31:02.460 { 00:31:02.460 "method": "keyring_file_add_key", 00:31:02.460 "params": { 00:31:02.460 "name": "key0", 00:31:02.460 "path": "/tmp/tmp.H3CYqJ5Y4W" 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "keyring_file_add_key", 00:31:02.460 "params": { 00:31:02.460 "name": "key1", 00:31:02.460 "path": "/tmp/tmp.MLrz4XxXjg" 00:31:02.460 } 00:31:02.460 } 00:31:02.460 ] 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "subsystem": "iobuf", 00:31:02.460 "config": [ 00:31:02.460 { 
00:31:02.460 "method": "iobuf_set_options", 00:31:02.460 "params": { 00:31:02.460 "small_pool_count": 8192, 00:31:02.460 "large_pool_count": 1024, 00:31:02.460 "small_bufsize": 8192, 00:31:02.460 "large_bufsize": 135168 00:31:02.460 } 00:31:02.460 } 00:31:02.460 ] 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "subsystem": "sock", 00:31:02.460 "config": [ 00:31:02.460 { 00:31:02.460 "method": "sock_impl_set_options", 00:31:02.460 "params": { 00:31:02.460 "impl_name": "uring", 00:31:02.460 "recv_buf_size": 2097152, 00:31:02.460 "send_buf_size": 2097152, 00:31:02.460 "enable_recv_pipe": true, 00:31:02.460 "enable_quickack": false, 00:31:02.460 "enable_placement_id": 0, 00:31:02.460 "enable_zerocopy_send_server": false, 00:31:02.460 "enable_zerocopy_send_client": false, 00:31:02.460 "zerocopy_threshold": 0, 00:31:02.460 "tls_version": 0, 00:31:02.460 "enable_ktls": false 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "sock_impl_set_options", 00:31:02.460 "params": { 00:31:02.460 "impl_name": "posix", 00:31:02.460 "recv_buf_size": 2097152, 00:31:02.460 "send_buf_size": 2097152, 00:31:02.460 "enable_recv_pipe": true, 00:31:02.460 "enable_quickack": false, 00:31:02.460 "enable_placement_id": 0, 00:31:02.460 "enable_zerocopy_send_server": true, 00:31:02.460 "enable_zerocopy_send_client": false, 00:31:02.460 "zerocopy_threshold": 0, 00:31:02.460 "tls_version": 0, 00:31:02.460 "enable_ktls": false 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "sock_impl_set_options", 00:31:02.460 "params": { 00:31:02.460 "impl_name": "ssl", 00:31:02.460 "recv_buf_size": 4096, 00:31:02.460 "send_buf_size": 4096, 00:31:02.460 "enable_recv_pipe": true, 00:31:02.460 "enable_quickack": false, 00:31:02.460 "enable_placement_id": 0, 00:31:02.460 "enable_zerocopy_send_server": true, 00:31:02.460 "enable_zerocopy_send_client": false, 00:31:02.460 "zerocopy_threshold": 0, 00:31:02.460 "tls_version": 0, 00:31:02.460 "enable_ktls": false 00:31:02.460 } 00:31:02.460 } 00:31:02.460 ] 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "subsystem": "vmd", 00:31:02.460 "config": [] 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "subsystem": "accel", 00:31:02.460 "config": [ 00:31:02.460 { 00:31:02.460 "method": "accel_set_options", 00:31:02.460 "params": { 00:31:02.460 "small_cache_size": 128, 00:31:02.460 "large_cache_size": 16, 00:31:02.460 "task_count": 2048, 00:31:02.460 "sequence_count": 2048, 00:31:02.460 "buf_count": 2048 00:31:02.460 } 00:31:02.460 } 00:31:02.460 ] 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "subsystem": "bdev", 00:31:02.460 "config": [ 00:31:02.460 { 00:31:02.460 "method": "bdev_set_options", 00:31:02.460 "params": { 00:31:02.460 "bdev_io_pool_size": 65535, 00:31:02.460 "bdev_io_cache_size": 256, 00:31:02.460 "bdev_auto_examine": true, 00:31:02.460 "iobuf_small_cache_size": 128, 00:31:02.460 "iobuf_large_cache_size": 16 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "bdev_raid_set_options", 00:31:02.460 "params": { 00:31:02.460 "process_window_size_kb": 1024 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "bdev_iscsi_set_options", 00:31:02.460 "params": { 00:31:02.460 "timeout_sec": 30 00:31:02.460 } 00:31:02.460 }, 00:31:02.460 { 00:31:02.460 "method": "bdev_nvme_set_options", 00:31:02.460 "params": { 00:31:02.460 "action_on_timeout": "none", 00:31:02.460 "timeout_us": 0, 00:31:02.460 "timeout_admin_us": 0, 00:31:02.460 "keep_alive_timeout_ms": 10000, 00:31:02.460 "arbitration_burst": 0, 00:31:02.460 "low_priority_weight": 0, 
00:31:02.460 "medium_priority_weight": 0, 00:31:02.460 "high_priority_weight": 0, 00:31:02.460 "nvme_adminq_poll_period_us": 10000, 00:31:02.460 "nvme_ioq_poll_period_us": 0, 00:31:02.460 "io_queue_requests": 512, 00:31:02.460 "delay_cmd_submit": true, 00:31:02.460 "transport_retry_count": 4, 00:31:02.460 "bdev_retry_count": 3, 00:31:02.460 "transport_ack_timeout": 0, 00:31:02.460 "ctrlr_loss_timeout_sec": 0, 00:31:02.460 "reconnect_delay_sec": 0, 00:31:02.460 "fast_io_fail_timeout_sec": 0, 00:31:02.460 "disable_auto_failback": false, 00:31:02.460 "generate_uuids": false, 00:31:02.460 "transport_tos": 0, 00:31:02.460 "nvme_error_stat": false, 00:31:02.460 "rdma_srq_size": 0, 00:31:02.461 "io_path_stat": false, 00:31:02.461 "allow_accel_sequence": false, 00:31:02.461 "rdma_max_cq_size": 0, 00:31:02.461 "rdma_cm_event_timeout_ms": 0, 00:31:02.461 "dhchap_digests": [ 00:31:02.461 "sha256", 00:31:02.461 "sha384", 00:31:02.461 "sha512" 00:31:02.461 ], 00:31:02.461 "dhchap_dhgroups": [ 00:31:02.461 "null", 00:31:02.461 "ffdhe2048", 00:31:02.461 "ffdhe3072", 00:31:02.461 "ffdhe4096", 00:31:02.461 "ffdhe6144", 00:31:02.461 "ffdhe8192" 00:31:02.461 ] 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_nvme_attach_controller", 00:31:02.461 "params": { 00:31:02.461 "name": "nvme0", 00:31:02.461 "trtype": "TCP", 00:31:02.461 "adrfam": "IPv4", 00:31:02.461 "traddr": "127.0.0.1", 00:31:02.461 "trsvcid": "4420", 00:31:02.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.461 "prchk_reftag": false, 00:31:02.461 "prchk_guard": false, 00:31:02.461 "ctrlr_loss_timeout_sec": 0, 00:31:02.461 "reconnect_delay_sec": 0, 00:31:02.461 "fast_io_fail_timeout_sec": 0, 00:31:02.461 "psk": "key0", 00:31:02.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.461 "hdgst": false, 00:31:02.461 "ddgst": false 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_nvme_set_hotplug", 00:31:02.461 "params": { 00:31:02.461 "period_us": 100000, 00:31:02.461 "enable": false 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_wait_for_examine" 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "nbd", 00:31:02.461 "config": [] 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }' 00:31:02.461 13:49:15 keyring_file -- keyring/file.sh@114 -- # killprocess 100468 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 100468 ']' 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@950 -- # kill -0 100468 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100468 00:31:02.461 killing process with pid 100468 00:31:02.461 Received shutdown signal, test time was about 1.000000 seconds 00:31:02.461 00:31:02.461 Latency(us) 00:31:02.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.461 =================================================================================================================== 00:31:02.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100468' 
00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@965 -- # kill 100468 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@970 -- # wait 100468 00:31:02.461 13:49:15 keyring_file -- keyring/file.sh@117 -- # bperfpid=100716 00:31:02.461 13:49:15 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100716 /var/tmp/bperf.sock 00:31:02.461 13:49:15 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:02.461 13:49:15 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 100716 ']' 00:31:02.461 13:49:15 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:02.461 "subsystems": [ 00:31:02.461 { 00:31:02.461 "subsystem": "keyring", 00:31:02.461 "config": [ 00:31:02.461 { 00:31:02.461 "method": "keyring_file_add_key", 00:31:02.461 "params": { 00:31:02.461 "name": "key0", 00:31:02.461 "path": "/tmp/tmp.H3CYqJ5Y4W" 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "keyring_file_add_key", 00:31:02.461 "params": { 00:31:02.461 "name": "key1", 00:31:02.461 "path": "/tmp/tmp.MLrz4XxXjg" 00:31:02.461 } 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "iobuf", 00:31:02.461 "config": [ 00:31:02.461 { 00:31:02.461 "method": "iobuf_set_options", 00:31:02.461 "params": { 00:31:02.461 "small_pool_count": 8192, 00:31:02.461 "large_pool_count": 1024, 00:31:02.461 "small_bufsize": 8192, 00:31:02.461 "large_bufsize": 135168 00:31:02.461 } 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "sock", 00:31:02.461 "config": [ 00:31:02.461 { 00:31:02.461 "method": "sock_impl_set_options", 00:31:02.461 "params": { 00:31:02.461 "impl_name": "uring", 00:31:02.461 "recv_buf_size": 2097152, 00:31:02.461 "send_buf_size": 2097152, 00:31:02.461 "enable_recv_pipe": true, 00:31:02.461 "enable_quickack": false, 00:31:02.461 "enable_placement_id": 0, 00:31:02.461 "enable_zerocopy_send_server": false, 00:31:02.461 "enable_zerocopy_send_client": false, 00:31:02.461 "zerocopy_threshold": 0, 00:31:02.461 "tls_version": 0, 00:31:02.461 "enable_ktls": false 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "sock_impl_set_options", 00:31:02.461 "params": { 00:31:02.461 "impl_name": "posix", 00:31:02.461 "recv_buf_size": 2097152, 00:31:02.461 "send_buf_size": 2097152, 00:31:02.461 "enable_recv_pipe": true, 00:31:02.461 "enable_quickack": false, 00:31:02.461 "enable_placement_id": 0, 00:31:02.461 "enable_zerocopy_send_server": true, 00:31:02.461 "enable_zerocopy_send_client": false, 00:31:02.461 "zerocopy_threshold": 0, 00:31:02.461 "tls_version": 0, 00:31:02.461 "enable_ktls": false 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "sock_impl_set_options", 00:31:02.461 "params": { 00:31:02.461 "impl_name": "ssl", 00:31:02.461 "recv_buf_size": 4096, 00:31:02.461 "send_buf_size": 4096, 00:31:02.461 "enable_recv_pipe": true, 00:31:02.461 "enable_quickack": false, 00:31:02.461 "enable_placement_id": 0, 00:31:02.461 "enable_zerocopy_send_server": true, 00:31:02.461 "enable_zerocopy_send_client": false, 00:31:02.461 "zerocopy_threshold": 0, 00:31:02.461 "tls_version": 0, 00:31:02.461 "enable_ktls": false 00:31:02.461 } 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "vmd", 00:31:02.461 "config": [] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "accel", 00:31:02.461 "config": [ 00:31:02.461 { 00:31:02.461 
"method": "accel_set_options", 00:31:02.461 "params": { 00:31:02.461 "small_cache_size": 128, 00:31:02.461 "large_cache_size": 16, 00:31:02.461 "task_count": 2048, 00:31:02.461 "sequence_count": 2048, 00:31:02.461 "buf_count": 2048 00:31:02.461 } 00:31:02.461 } 00:31:02.461 ] 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "subsystem": "bdev", 00:31:02.461 "config": [ 00:31:02.461 { 00:31:02.461 "method": "bdev_set_options", 00:31:02.461 "params": { 00:31:02.461 "bdev_io_pool_size": 65535, 00:31:02.461 "bdev_io_cache_size": 256, 00:31:02.461 "bdev_auto_examine": true, 00:31:02.461 "iobuf_small_cache_size": 128, 00:31:02.461 "iobuf_large_cache_size": 16 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_raid_set_options", 00:31:02.461 "params": { 00:31:02.461 "process_window_size_kb": 1024 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_iscsi_set_options", 00:31:02.461 "params": { 00:31:02.461 "timeout_sec": 30 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_nvme_set_options", 00:31:02.461 "params": { 00:31:02.461 "action_on_timeout": "none", 00:31:02.461 "timeout_us": 0, 00:31:02.461 "timeout_admin_us": 0, 00:31:02.461 "keep_alive_timeout_ms": 10000, 00:31:02.461 "arbitration_burst": 0, 00:31:02.461 "low_priority_weight": 0, 00:31:02.461 "medium_priority_weight": 0, 00:31:02.461 "high_priority_weight": 0, 00:31:02.461 "nvme_adminq_poll_period_us": 10000, 00:31:02.461 "nvme_ioq_poll_period_us": 0, 00:31:02.461 "io_queue_requests": 512, 00:31:02.461 "delay_cmd_submit": true, 00:31:02.461 "transport_retry_count": 4, 00:31:02.461 "bdev_retry_count": 3, 00:31:02.461 "transport_ack_timeout": 0, 00:31:02.461 "ctrlr_loss_timeout_sec": 0, 00:31:02.461 "reconnect_delay_sec": 0, 00:31:02.461 "fast_io_fail_timeout_sec": 0, 00:31:02.461 "disable_auto_failback": false, 00:31:02.461 "generate_uuids": false, 00:31:02.461 "transport_tos": 0, 00:31:02.461 "nvme_error_stat": false, 00:31:02.461 "rdma_srq_size": 0, 00:31:02.461 "io_path_stat": false, 00:31:02.461 "allow_accel_sequence": false, 00:31:02.461 "rdma_max_cq_size": 0, 00:31:02.461 "rdma_cm_event_timeout_ms": 0, 00:31:02.461 "dhchap_digests": [ 00:31:02.461 "sha256", 00:31:02.461 "sha384", 00:31:02.461 "sha512" 00:31:02.461 ], 00:31:02.461 "dhchap_dhgroups": [ 00:31:02.461 "null", 00:31:02.461 "ffdhe2048", 00:31:02.461 "ffdhe3072", 00:31:02.461 "ffdhe4096", 00:31:02.461 "ffdhe6144", 00:31:02.461 "ffdhe8192" 00:31:02.461 ] 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_nvme_attach_controller", 00:31:02.461 "params": { 00:31:02.461 "name": "nvme0", 00:31:02.461 "trtype": "TCP", 00:31:02.461 "adrfam": "IPv4", 00:31:02.461 "traddr": "127.0.0.1", 00:31:02.461 "trsvcid": "4420", 00:31:02.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:02.461 "prchk_reftag": false, 00:31:02.461 "prchk_guard": false, 00:31:02.461 "ctrlr_loss_timeout_sec": 0, 00:31:02.461 "reconnect_delay_sec": 0, 00:31:02.461 "fast_io_fail_timeout_sec": 0, 00:31:02.461 "psk": "key0", 00:31:02.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:02.461 "hdgst": false, 00:31:02.461 "ddgst": false 00:31:02.461 } 00:31:02.461 }, 00:31:02.461 { 00:31:02.461 "method": "bdev_nvme_set_hotplug", 00:31:02.461 "params": { 00:31:02.461 "period_us": 100000, 00:31:02.462 "enable": false 00:31:02.462 } 00:31:02.462 }, 00:31:02.462 { 00:31:02.462 "method": "bdev_wait_for_examine" 00:31:02.462 } 00:31:02.462 ] 00:31:02.462 }, 00:31:02.462 { 00:31:02.462 "subsystem": "nbd", 00:31:02.462 "config": [] 
00:31:02.462 } 00:31:02.462 ] 00:31:02.462 }' 00:31:02.462 13:49:15 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:02.462 13:49:15 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:02.462 13:49:15 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:02.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:02.720 13:49:15 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:02.720 13:49:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:02.720 [2024-05-15 13:49:15.607882] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:02.720 [2024-05-15 13:49:15.607982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100716 ] 00:31:02.720 [2024-05-15 13:49:15.734543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:02.720 [2024-05-15 13:49:15.753288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.720 [2024-05-15 13:49:15.806161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.978 [2024-05-15 13:49:15.971980] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:03.543 13:49:16 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:03.543 13:49:16 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:03.543 13:49:16 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:03.543 13:49:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.543 13:49:16 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:03.801 13:49:16 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:03.801 13:49:16 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:03.801 13:49:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:03.801 13:49:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:03.801 13:49:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:03.801 13:49:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:03.801 13:49:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:04.366 13:49:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:04.366 13:49:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:04.366 13:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:04.366 13:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:04.366 13:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:04.366 13:49:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:04.366 13:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:04.625 13:49:17 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:04.625 13:49:17 keyring_file -- keyring/file.sh@123 -- # jq -r 
'.[].name' 00:31:04.625 13:49:17 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:04.625 13:49:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:04.884 13:49:17 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:04.884 13:49:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:04.884 13:49:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.H3CYqJ5Y4W /tmp/tmp.MLrz4XxXjg 00:31:04.884 13:49:17 keyring_file -- keyring/file.sh@20 -- # killprocess 100716 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 100716 ']' 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@950 -- # kill -0 100716 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100716 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100716' 00:31:04.884 killing process with pid 100716 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@965 -- # kill 100716 00:31:04.884 Received shutdown signal, test time was about 1.000000 seconds 00:31:04.884 00:31:04.884 Latency(us) 00:31:04.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.884 =================================================================================================================== 00:31:04.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@970 -- # wait 100716 00:31:04.884 13:49:17 keyring_file -- keyring/file.sh@21 -- # killprocess 100463 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 100463 ']' 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@950 -- # kill -0 100463 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@951 -- # uname 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:04.884 13:49:17 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100463 00:31:05.196 13:49:17 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:05.196 13:49:17 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:05.196 13:49:17 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100463' 00:31:05.196 killing process with pid 100463 00:31:05.196 13:49:17 keyring_file -- common/autotest_common.sh@965 -- # kill 100463 00:31:05.196 [2024-05-15 13:49:17.987420] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:05.196 [2024-05-15 13:49:17.987460] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:05.196 13:49:17 keyring_file -- common/autotest_common.sh@970 -- # wait 100463 00:31:05.455 00:31:05.455 real 0m15.254s 00:31:05.455 user 0m38.426s 00:31:05.455 sys 0m3.490s 
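The trace above is the keyring_file test driving bdevperf entirely over JSON-RPC: the full subsystem config (keyring, iobuf, sock, accel, bdev, nbd) reaches the process as /dev/fd/63, two file-based keys are registered with keyring_file_add_key, an NVMe/TCP controller is attached with "psk": "key0", and key reference counts are read back through rpc.py and jq. A minimal sketch of that driving pattern, trimmed to the keyring section only and reusing the paths the log records (the SPDK variable and the abbreviated config are my shorthand, not the test script itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Config trimmed to the keyring subsystem; the run above also carries iobuf,
    # sock, accel, bdev and nbd sections in the same JSON document.
    config='{ "subsystems": [ { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.H3CYqJ5Y4W" } },
      { "method": "keyring_file_add_key",
        "params": { "name": "key1", "path": "/tmp/tmp.MLrz4XxXjg" } } ] } ] }'
    # Process substitution is what makes the config appear as /dev/fd/63 in the trace.
    "$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    # (The test waits for the UNIX socket to come up -- waitforlisten in the trace --
    # before issuing RPCs; key state is then inspected over the bperf RPC socket.)
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys | jq length
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0")' | jq -r .refcnt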
00:31:05.455 13:49:18 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:05.455 13:49:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:05.455 ************************************ 00:31:05.455 END TEST keyring_file 00:31:05.455 ************************************ 00:31:05.455 13:49:18 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:31:05.455 13:49:18 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:31:05.455 13:49:18 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:31:05.455 13:49:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:31:05.455 13:49:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:31:05.455 13:49:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:31:05.455 13:49:18 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:31:05.455 13:49:18 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:31:05.455 13:49:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:05.455 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:31:05.455 13:49:18 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:31:05.455 13:49:18 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:31:05.455 13:49:18 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:31:05.455 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:31:06.831 INFO: APP EXITING 00:31:06.831 INFO: killing all VMs 00:31:07.090 INFO: killing vhost app 00:31:07.090 INFO: EXIT DONE 00:31:07.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:07.657 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:07.657 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:08.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:08.604 Cleaning 00:31:08.604 Removing: /var/run/dpdk/spdk0/config 00:31:08.604 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:08.604 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:08.604 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:08.604 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:08.604 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:08.604 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:08.604 Removing: /var/run/dpdk/spdk1/config 00:31:08.604 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:08.604 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:08.604 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:08.604 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:08.604 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:08.604 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:08.604 Removing: /var/run/dpdk/spdk2/config 00:31:08.604 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:08.604 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 
00:31:08.604 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:08.604 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:08.604 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:08.604 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:08.604 Removing: /var/run/dpdk/spdk3/config 00:31:08.604 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:08.604 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:08.604 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:08.604 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:08.604 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:08.604 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:08.604 Removing: /var/run/dpdk/spdk4/config 00:31:08.604 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:08.604 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:08.604 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:08.604 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:08.604 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:08.604 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:08.604 Removing: /dev/shm/nvmf_trace.0 00:31:08.604 Removing: /dev/shm/spdk_tgt_trace.pid71870 00:31:08.604 Removing: /var/run/dpdk/spdk0 00:31:08.604 Removing: /var/run/dpdk/spdk1 00:31:08.604 Removing: /var/run/dpdk/spdk2 00:31:08.604 Removing: /var/run/dpdk/spdk3 00:31:08.604 Removing: /var/run/dpdk/spdk4 00:31:08.604 Removing: /var/run/dpdk/spdk_pid100000 00:31:08.604 Removing: /var/run/dpdk/spdk_pid100035 00:31:08.604 Removing: /var/run/dpdk/spdk_pid100463 00:31:08.604 Removing: /var/run/dpdk/spdk_pid100468 00:31:08.604 Removing: /var/run/dpdk/spdk_pid100716 00:31:08.604 Removing: /var/run/dpdk/spdk_pid71726 00:31:08.604 Removing: /var/run/dpdk/spdk_pid71870 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72064 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72145 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72178 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72282 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72298 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72416 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72601 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72747 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72812 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72880 00:31:08.604 Removing: /var/run/dpdk/spdk_pid72958 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73028 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73061 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73096 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73158 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73259 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73691 00:31:08.604 Removing: /var/run/dpdk/spdk_pid73743 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73789 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73797 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73864 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73880 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73947 00:31:08.863 Removing: /var/run/dpdk/spdk_pid73956 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74001 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74019 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74059 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74070 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74198 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74228 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74302 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74357 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74381 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74440 00:31:08.863 
Removing: /var/run/dpdk/spdk_pid74474 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74509 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74538 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74578 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74607 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74646 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74676 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74711 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74745 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74774 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74809 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74843 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74878 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74912 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74947 00:31:08.863 Removing: /var/run/dpdk/spdk_pid74976 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75019 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75051 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75091 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75121 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75191 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75284 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75579 00:31:08.863 Removing: /var/run/dpdk/spdk_pid75591 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75633 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75641 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75662 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75681 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75700 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75710 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75729 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75748 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75758 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75783 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75796 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75812 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75831 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75844 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75864 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75883 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75898 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75913 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75944 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75957 00:31:08.864 Removing: /var/run/dpdk/spdk_pid75987 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76051 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76079 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76089 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76116 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76127 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76133 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76177 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76189 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76219 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76233 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76238 00:31:08.864 Removing: /var/run/dpdk/spdk_pid76243 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76257 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76261 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76276 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76280 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76314 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76341 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76350 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76379 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76388 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76401 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76442 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76453 00:31:09.122 Removing: 
/var/run/dpdk/spdk_pid76485 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76493 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76500 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76508 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76515 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76523 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76530 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76538 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76606 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76654 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76758 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76792 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76837 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76846 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76868 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76888 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76914 00:31:09.122 Removing: /var/run/dpdk/spdk_pid76935 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77005 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77021 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77066 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77126 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77183 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77206 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77298 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77335 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77373 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77586 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77678 00:31:09.122 Removing: /var/run/dpdk/spdk_pid77712 00:31:09.122 Removing: /var/run/dpdk/spdk_pid78023 00:31:09.122 Removing: /var/run/dpdk/spdk_pid78061 00:31:09.122 Removing: /var/run/dpdk/spdk_pid78355 00:31:09.122 Removing: /var/run/dpdk/spdk_pid78764 00:31:09.122 Removing: /var/run/dpdk/spdk_pid79033 00:31:09.122 Removing: /var/run/dpdk/spdk_pid79798 00:31:09.122 Removing: /var/run/dpdk/spdk_pid80605 00:31:09.122 Removing: /var/run/dpdk/spdk_pid80716 00:31:09.122 Removing: /var/run/dpdk/spdk_pid80786 00:31:09.122 Removing: /var/run/dpdk/spdk_pid82046 00:31:09.122 Removing: /var/run/dpdk/spdk_pid82247 00:31:09.122 Removing: /var/run/dpdk/spdk_pid85446 00:31:09.122 Removing: /var/run/dpdk/spdk_pid85736 00:31:09.122 Removing: /var/run/dpdk/spdk_pid85844 00:31:09.122 Removing: /var/run/dpdk/spdk_pid85976 00:31:09.122 Removing: /var/run/dpdk/spdk_pid85992 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86024 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86040 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86124 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86259 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86394 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86461 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86649 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86738 00:31:09.122 Removing: /var/run/dpdk/spdk_pid86831 00:31:09.122 Removing: /var/run/dpdk/spdk_pid87136 00:31:09.122 Removing: /var/run/dpdk/spdk_pid87478 00:31:09.122 Removing: /var/run/dpdk/spdk_pid87481 00:31:09.122 Removing: /var/run/dpdk/spdk_pid89653 00:31:09.122 Removing: /var/run/dpdk/spdk_pid89661 00:31:09.380 Removing: /var/run/dpdk/spdk_pid89927 00:31:09.380 Removing: /var/run/dpdk/spdk_pid89942 00:31:09.380 Removing: /var/run/dpdk/spdk_pid89962 00:31:09.380 Removing: /var/run/dpdk/spdk_pid89992 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90003 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90085 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90094 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90202 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90210 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90318 
00:31:09.380 Removing: /var/run/dpdk/spdk_pid90324 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90702 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90745 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90829 00:31:09.380 Removing: /var/run/dpdk/spdk_pid90878 00:31:09.380 Removing: /var/run/dpdk/spdk_pid91164 00:31:09.380 Removing: /var/run/dpdk/spdk_pid91360 00:31:09.380 Removing: /var/run/dpdk/spdk_pid91734 00:31:09.380 Removing: /var/run/dpdk/spdk_pid92221 00:31:09.380 Removing: /var/run/dpdk/spdk_pid93038 00:31:09.380 Removing: /var/run/dpdk/spdk_pid93625 00:31:09.380 Removing: /var/run/dpdk/spdk_pid93627 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95521 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95574 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95634 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95687 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95786 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95843 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95896 00:31:09.380 Removing: /var/run/dpdk/spdk_pid95944 00:31:09.380 Removing: /var/run/dpdk/spdk_pid96258 00:31:09.380 Removing: /var/run/dpdk/spdk_pid97407 00:31:09.380 Removing: /var/run/dpdk/spdk_pid97545 00:31:09.380 Removing: /var/run/dpdk/spdk_pid97782 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98326 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98485 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98637 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98734 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98885 00:31:09.380 Removing: /var/run/dpdk/spdk_pid98994 00:31:09.380 Removing: /var/run/dpdk/spdk_pid99651 00:31:09.380 Removing: /var/run/dpdk/spdk_pid99682 00:31:09.380 Removing: /var/run/dpdk/spdk_pid99717 00:31:09.380 Removing: /var/run/dpdk/spdk_pid99970 00:31:09.380 Clean 00:31:09.380 13:49:22 -- common/autotest_common.sh@1447 -- # return 0 00:31:09.380 13:49:22 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:31:09.380 13:49:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.380 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:31:09.380 13:49:22 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:31:09.380 13:49:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.380 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:31:09.639 13:49:22 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:09.639 13:49:22 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:09.639 13:49:22 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:09.639 13:49:22 -- spdk/autotest.sh@387 -- # hash lcov 00:31:09.639 13:49:22 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:09.639 13:49:22 -- spdk/autotest.sh@389 -- # hostname 00:31:09.639 13:49:22 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1701806725-069-updated-1701632595 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:09.898 geninfo: WARNING: invalid characters removed from testname! 
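The capture step above writes cov_test.info; the entries that follow merge it with the pre-test baseline and strip third-party and tooling paths from the combined report. Condensed as a sketch for readability (the $out and $lcovopts shorthands and the loop are mine; the log passes the full set of --rc options on every lcov call):

    out=/home/vagrant/spdk_repo/spdk/../output
    lcovopts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    # Merge the pre-test baseline with the post-test capture.
    lcov $lcovopts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # Drop DPDK, system headers and helper apps from the combined report.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $lcovopts -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done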
00:31:36.451 13:49:47 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:38.390 13:49:51 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:40.925 13:49:53 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:43.454 13:49:56 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:46.031 13:49:58 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:47.964 13:50:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:50.501 13:50:03 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:50.501 13:50:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:50.501 13:50:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:50.501 13:50:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.501 13:50:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.501 13:50:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.501 13:50:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.501 13:50:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.501 13:50:03 -- paths/export.sh@5 -- $ export PATH 00:31:50.501 13:50:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.501 13:50:03 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:50.501 13:50:03 -- common/autobuild_common.sh@437 -- $ date +%s 00:31:50.501 13:50:03 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715781003.XXXXXX 00:31:50.501 13:50:03 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715781003.QfVpkt 00:31:50.501 13:50:03 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:31:50.501 13:50:03 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:31:50.501 13:50:03 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:50.501 13:50:03 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:50.501 13:50:03 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:50.501 13:50:03 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:50.501 13:50:03 -- common/autobuild_common.sh@453 -- $ get_config_params 00:31:50.501 13:50:03 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:31:50.501 13:50:03 -- common/autotest_common.sh@10 -- $ set +x 00:31:50.501 13:50:03 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:50.501 13:50:03 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:31:50.501 13:50:03 -- pm/common@17 -- $ local monitor 00:31:50.501 13:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:50.501 13:50:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:50.501 13:50:03 -- pm/common@21 -- $ date +%s 00:31:50.501 13:50:03 -- pm/common@25 -- $ sleep 1 00:31:50.501 13:50:03 -- pm/common@21 -- $ date +%s 00:31:50.501 13:50:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715781003 00:31:50.501 13:50:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715781003 00:31:50.501 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715781003_collect-cpu-load.pm.log 00:31:50.501 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715781003_collect-vmstat.pm.log 00:31:51.879 13:50:04 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:31:51.879 13:50:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:51.879 13:50:04 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:51.879 13:50:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:51.879 13:50:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:51.879 13:50:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:51.879 13:50:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:51.879 13:50:04 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:51.879 13:50:04 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:51.879 13:50:04 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:51.879 13:50:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:51.879 13:50:04 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:51.879 13:50:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:51.879 13:50:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:51.879 13:50:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.879 13:50:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:31:51.879 13:50:04 -- pm/common@44 -- $ pid=102284 00:31:51.879 13:50:04 -- pm/common@50 -- $ kill -TERM 102284 00:31:51.879 13:50:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:51.879 13:50:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:31:51.879 13:50:04 -- pm/common@44 -- $ pid=102286 00:31:51.879 13:50:04 -- pm/common@50 -- $ kill -TERM 102286 00:31:51.879 + [[ -n 5757 ]] 00:31:51.879 + sudo kill 5757 00:31:51.890 [Pipeline] } 00:31:51.907 [Pipeline] // timeout 00:31:51.912 [Pipeline] } 00:31:51.934 [Pipeline] // stage 00:31:51.939 [Pipeline] } 00:31:51.959 [Pipeline] // catchError 00:31:51.968 [Pipeline] stage 00:31:51.970 [Pipeline] { (Stop VM) 00:31:51.983 [Pipeline] sh 00:31:52.326 + vagrant halt 00:31:56.562 ==> default: Halting domain... 00:32:03.164 [Pipeline] sh 00:32:03.444 + vagrant destroy -f 00:32:07.635 ==> default: Removing domain... 
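Before the VM is torn down, stop_monitor_resources in the trace above shuts down the two collectors that autopackage started: it checks for their pid files under the shared power/ output directory and sends SIGTERM. A condensed sketch of that pattern (the loop and the $power variable are my shorthand; reading the pid back from the pid file is implied by the trace rather than shown verbatim):

    power=/home/vagrant/spdk_repo/spdk/../output/power
    # Pid files left behind by the collect-cpu-load / collect-vmstat collectors started earlier.
    for pidfile in collect-cpu-load.pid collect-vmstat.pid; do
        if [[ -e "$power/$pidfile" ]]; then
            kill -TERM "$(<"$power/$pidfile")"
        fi
    done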
00:32:07.648 [Pipeline] sh 00:32:07.932 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output 00:32:07.941 [Pipeline] } 00:32:07.961 [Pipeline] // stage 00:32:07.966 [Pipeline] } 00:32:07.982 [Pipeline] // dir 00:32:07.987 [Pipeline] } 00:32:08.003 [Pipeline] // wrap 00:32:08.009 [Pipeline] } 00:32:08.024 [Pipeline] // catchError 00:32:08.033 [Pipeline] stage 00:32:08.035 [Pipeline] { (Epilogue) 00:32:08.050 [Pipeline] sh 00:32:08.343 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:14.993 [Pipeline] catchError 00:32:14.995 [Pipeline] { 00:32:15.010 [Pipeline] sh 00:32:15.292 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:15.552 Artifacts sizes are good 00:32:15.562 [Pipeline] } 00:32:15.582 [Pipeline] // catchError 00:32:15.595 [Pipeline] archiveArtifacts 00:32:15.602 Archiving artifacts 00:32:15.763 [Pipeline] cleanWs 00:32:15.775 [WS-CLEANUP] Deleting project workspace... 00:32:15.775 [WS-CLEANUP] Deferred wipeout is used... 00:32:15.781 [WS-CLEANUP] done 00:32:15.783 [Pipeline] } 00:32:15.800 [Pipeline] // stage 00:32:15.806 [Pipeline] } 00:32:15.823 [Pipeline] // node 00:32:15.829 [Pipeline] End of Pipeline 00:32:15.869 Finished: SUCCESS